Abstract: Chaiyaphum Starch Co. Ltd. is one of many starch
manufacturers that have introduced machinery to aid in manufacturing.
Even though machinery has replaced many manual steps and is now a
significant part of the manufacturing process, problems in the
current process flow must still be solved to increase efficiency.
This paper's aim is to increase productivity while maintaining the
desired starch quality by redesigning the flipping machine's
mechanical control system, which suffers from an unacceptably short
functional lifetime. The problem stems from the control system's
bearings: fluids and humidity can penetrate the bearings directly,
compounded by vibration from the machine's own operation. The wheel
used to sense starch thickness occasionally falls off its shaft
because of high-speed rotation during operation, while the shaft
itself may bend from impact when processing dried bread. Redesigning
the mechanical control system has increased efficiency, allowing
quality thickness measurement while extending the functional lifetime
by an additional 62 days.
Abstract: Biologically, the human brain processes information through both unimodal and multimodal approaches. Information is progressively abstracted and seamlessly fused, and the fusion of multimodal inputs allows a holistic understanding of a problem. The proliferation of technology has produced an exponential variety of data sources, which can be likened to the multimodal state of the human brain. This inspires the development of a methodology for exploring multimodal data and identifying multi-view patterns. Specifically, we propose a brain-inspired conceptual model that allows the exploration and identification of patterns at different levels of granularity, different types of hierarchies, and different types of modalities. A structurally adaptive neural network is deployed to implement the proposed model. Furthermore, the acquisition of multi-view patterns with the proposed model is demonstrated and discussed with experimental results.
Abstract: Effective estimation of just noticeable distortion (JND) for images is helpful for increasing the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model measuring the base detection threshold for each DCT coefficient in each color component, the luminance masking, contrast masking, and cross masking adjustments are utilized for the luminance component, while a variance-based masking adjustment, based on the coefficient variation in the block, is proposed for the chrominance components. To verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.
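The variance-based chrominance adjustment can be illustrated with a small sketch. This is not the paper's model: the base threshold `t_base`, the gain constant `k`, and the clamp are illustrative assumptions; only the idea that higher block variance masks more distortion, and so permits a higher JND threshold, is taken from the abstract.

```python
# Hedged sketch: elevate a base JND threshold for a chrominance block
# according to its variance. All constants are illustrative assumptions,
# not the paper's formulas.

def block_variance(block):
    """Sample variance of an 8x8 block of pixel values."""
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum((v - mean) ** 2 for row in block for v in row) / n

def chroma_jnd_threshold(block, t_base=4.0, k=0.02, max_gain=4.0):
    """Raise the threshold in high-variance (busy) blocks, where
    distortion is harder to see; clamp the elevation factor."""
    gain = 1.0 + k * block_variance(block)
    return t_base * min(gain, max_gain)

flat = [[128] * 8 for _ in range(8)]          # uniform block, variance 0
busy = [[(i * 37 + j * 91) % 256 for j in range(8)] for i in range(8)]

print(chroma_jnd_threshold(flat))  # base threshold, no masking
print(chroma_jnd_threshold(busy))  # elevated (clamped) threshold
```

In a JPEG-style coder, any coefficient error below this per-block threshold would be treated as perceptually invisible.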
Abstract: The active power filter continues to be a powerful tool for controlling harmonics in power systems, thereby enhancing power quality. This paper presents a fuzzy-tuned PID controller based shunt active filter to diminish the harmonics caused by nonlinear loads such as thyristor bridge rectifiers and unbalanced loads. Here, the fuzzy controller tunes the PID based on the firing angle of the thyristor bridge rectifiers and variations in the input RMS current. The shunt APF system is implemented with a three-phase current-controlled Voltage Source Inverter (VSI) connected at the point of common coupling, compensating the current harmonics by injecting equal but opposite filter currents. These controllers are capable of controlling the dc-side capacitor voltage and estimating the reference currents. A Hysteresis Current Controller (HCC) is used to generate switching signals for the voltage source inverter. Simulation studies are carried out with nonlinear loads such as a thyristor bridge rectifier along with unbalanced loads, and the results show that the APF with the fuzzy-tuned PID controller works effectively for different firing angles of the nonlinear load.
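The hysteresis current control step can be sketched as follows. This is a minimal single-phase illustration under assumed values for the band width, the current slew rate, and a 50 Hz reference; it is not the paper's three-phase APF model.

```python
# Hedged sketch of hysteresis current control: one inverter leg switches
# so the filter current tracks its reference within a fixed band.
# Band width and slew rate are illustrative assumptions.
import math

def hcc_switch(i_ref, i_act, band, state):
    """Return the new switching state (True = upper switch on)."""
    if i_act > i_ref + band / 2:
        return False          # current too high: switch low
    if i_act < i_ref - band / 2:
        return True           # current too low: switch high
    return state              # inside the band: keep last state

# Crude simulation: the actual current ramps up or down depending on
# the switch state while tracking a sinusoidal reference.
dt, band, i_act, state = 1e-5, 0.2, 0.0, True
worst_error = 0.0
for n in range(20000):
    t = n * dt
    i_ref = 10.0 * math.sin(2 * math.pi * 50 * t)
    state = hcc_switch(i_ref, i_act, band, state)
    i_act += (4000.0 if state else -4000.0) * dt   # di/dt set by VSI voltage
    worst_error = max(worst_error, abs(i_act - i_ref))
print(worst_error)  # stays close to the band width
```

The slew rate (4000 A/s) must exceed the steepest slope of the reference for the band to be held, which is the usual HCC design constraint.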
Abstract: Various security APIs (Application Programming
Interfaces) are used in a variety of application areas requiring
information security functions. However, these standards are not
compatible, and developers must use the APIs selectively depending
on the application environment or the programming language. To
resolve this problem, we propose a standard draft for an information
security component, and SSL (Secure Sockets Layer) has been
implemented using the confidentiality and integrity component
interface to verify the validity of the proposed standard. The
implemented SSL uses the lower-level SSL component when establishing
RMI (Remote Method Invocation) communication between components, as
if the security algorithm had been implemented by adding one more
layer on top of TCP/IP.
Abstract: This paper presents anti-synchronization of chaos
between two different chaotic systems using the active control
method. The proposed technique is applied to achieve chaos
anti-synchronization for the Lü and Rössler dynamical systems.
Numerical simulations are implemented to verify the results.
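A minimal numerical sketch of the scheme, assuming the common textbook parameter sets for the Lü and Rössler systems and a simple linear feedback gain k (the paper's exact controller may differ):

```python
# Hedged sketch of anti-synchronization by active control: the Lü
# system drives a Rössler system, and the control input cancels both
# vector fields and adds linear feedback so that the sum
# e = x_slave + x_master decays to zero.

def lu(s, a=36.0, b=3.0, c=20.0):
    x, y, z = s
    return (a * (y - x), -x * z + c * y, x * y - b * z)

def rossler(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return (-y - z, x + a * y, b + z * (x - c))

def step(m, s, dt=1e-3, k=5.0):
    fm, fs = lu(m), rossler(s)
    # Active control: u = -f_rossler(s) - f_lu(m) - k*(s + m)
    # gives error dynamics e' = -k e, so e -> 0 (anti-synchronization).
    u = tuple(-fs[i] - fm[i] - k * (s[i] + m[i]) for i in range(3))
    m2 = tuple(m[i] + dt * fm[i] for i in range(3))
    s2 = tuple(s[i] + dt * (fs[i] + u[i]) for i in range(3))
    return m2, s2

m, s = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)   # arbitrary initial states
for _ in range(10000):                     # 10 s of Euler integration
    m, s = step(m, s)
err = max(abs(s[i] + m[i]) for i in range(3))
print(err)  # near zero: states are equal and opposite
```

With this choice of u the error obeys e' = -k e exactly, so convergence is guaranteed regardless of the chaotic master trajectory.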
Abstract: In recent years image watermarking has become an
important research area in data security, confidentiality and image
integrity. Many watermarking techniques were proposed for medical
images. However, medical images, unlike most images, require
extreme care when embedding additional data within them, because the
additional information must not affect image quality and readability.
Medical records, electronic or not, are also bound by medical secrecy
and must therefore remain confidential. To fulfil those requirements,
this paper presents a lossless watermarking scheme for DICOM images.
The proposed fragile scheme combines two reversible
difference-expansion techniques for hiding patient data and
protecting the region of interest (ROI), with tamper detection and
recovery capability.
Patient's data are embedded into ROI, while recovery data are
embedded into region of non-interest (RONI). The experimental
results show that the original image can be exactly extracted from
the watermarked one when there is no tampering. When the ROI has been
tampered with, the tampered area can be localized and recovered with
a high-quality version of the original area.
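The difference-expansion primitive that reversible schemes of this kind build on (Tian's method) can be sketched briefly; the ROI/RONI partitioning, overflow handling, and tamper detection of the full scheme are omitted.

```python
# Hedged sketch of difference expansion: for a pixel pair (x, y) the
# difference is expanded to make room for one embedded bit, and the
# exact pair is recovered on extraction (reversible embedding).

def de_embed(x, y, bit):
    l = (x + y) // 2                 # integer average (invariant)
    h = x - y                        # difference
    h2 = 2 * h + bit                 # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2         # undo the expansion
    return l + (h + 1) // 2, l - h // 2, bit

x2, y2 = de_embed(130, 127, 1)
assert de_extract(x2, y2) == (130, 127, 1)   # lossless round trip
print(x2, y2)
```

Because the integer average is invariant under the expansion, the decoder needs no side information beyond knowing which pairs were expanded.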
Abstract: Lossless compression schemes with secure transmission play
a key role in telemedicine applications, helping in accurate
diagnosis and research. Traditional cryptographic algorithms for data
security are not fast enough to process vast amounts of data. Hence,
the novel secure lossless compression approach proposed in this
paper is based on a reversible integer wavelet transform, the EZW
algorithm, a new modified run-length coding for character
representation, and selective bit scrambling. The use of the lifting
scheme allows truly lossless integer-to-integer wavelet transforms
to be generated. Images are compressed and decompressed by the
well-known EZW algorithm. The proposed modified run-length coding
greatly improves the compression performance and also increases the
security level. This work employs a scrambling method that is fast,
simple to implement, and provides security. The lossless compression
ratios and distortion performance of the proposed method are found
to be better than those of other lossless techniques.
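As a baseline for the run-length stage, a plain run-length coder over an EZW-style symbol stream can be sketched; the paper's modified representation differs, so this shows only the generic idea.

```python
# Hedged sketch of baseline run-length coding on the character stream
# an EZW pass produces (symbols such as P, N, Z, T): symbol runs are
# replaced with (symbol, count) pairs.

def rle_encode(symbols):
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([s, 1])       # start a new run
    return [(s, n) for s, n in out]

def rle_decode(pairs):
    return "".join(s * n for s, n in pairs)

stream = "ZZZZPNZZTTTTZZZZZZ"        # illustrative EZW symbol stream
enc = rle_encode(stream)
assert rle_decode(enc) == stream     # lossless round trip
print(enc)
```

EZW streams are dominated by long zero-tree runs, which is why a run-length stage after the zerotree pass pays off.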
Abstract: In this paper the authors propose a protocol, which uses Elliptic Curve Cryptography (ECC) based on ElGamal's algorithm, for sending small amounts of data via an authentication server. The innovation of this approach is that there is no need for a symmetric algorithm or a safe communication channel such as SSL. The reason that ECC has been chosen instead of RSA is that it provides a methodology for obtaining high-speed implementations of authentication protocols and encrypted mail techniques while using fewer bits for the keys. This means that ECC systems require smaller chip sizes and consume less power. The proposed protocol has been implemented in Java to analyse its features and vulnerabilities in the real world.
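The underlying EC ElGamal primitive can be sketched on a tiny textbook curve. The curve y^2 = x^3 + 2x + 2 over F_17, the keys, and the message point below are illustrative choices, not the protocol's actual parameters; real systems use standardised curves and proper point encoding of messages.

```python
# Hedged sketch of ElGamal encryption over an elliptic curve, using a
# tiny textbook curve whose group has prime order 19. The point at
# infinity (identity) is represented as None.

P, A = 17, 2                       # field prime and curve coefficient a

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                # p2 = -p1
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def ec_mul(k, pt):                 # double-and-add scalar multiplication
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

G = (5, 1)                         # generator of the order-19 group
d = 7                              # receiver's private key
Q = ec_mul(d, G)                   # receiver's public key

M = ec_mul(3, G)                   # message encoded as a curve point
k = 11                             # sender's ephemeral secret
C1, C2 = ec_mul(k, G), ec_add(M, ec_mul(k, Q))   # ciphertext pair

neg = (C1[0], (-C1[1]) % P)        # -C1 by point negation
recovered = ec_add(C2, ec_mul(d, neg))   # M = C2 - d*C1
assert recovered == M              # decryption restores the point
print(recovered)
```

The key-size advantage cited in the abstract comes from the hardness of the elliptic-curve discrete logarithm: recovering d from Q here would require solving it in the curve group.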
Abstract: Rapid advancement in computing technology is bringing
computers and humans toward seamless integration in the future. The
emergence of the smartphone has driven the computing era towards
ubiquitous and pervasive computing. Recognizing human activity has
garnered considerable interest and has raised significant research
concerns in identifying contextual information useful for human
activity recognition. Besides being unobtrusive to users in daily
life, the smartphone has embedded built-in sensors capable of sensing
contextual information about its users, supported by a wide range of
network connections. In this paper, we discuss the classification
algorithms used in smartphone-based human activity recognition.
Existing technologies pertaining to smartphone-based research in
human activity recognition are highlighted and discussed. The paper
also presents our findings and opinions to formulate ideas for
improving current research trends. Understanding research trends will
enable researchers to have a clearer research direction and a common
vision of the latest developments in smartphone-based human activity
recognition.
Abstract: In this paper, an improvement of the PDLZW implementation
with a new dictionary-updating technique is proposed. A unique
dictionary is partitioned into hierarchical variable-word-width
dictionaries, which allows the dictionaries to be searched in
parallel. Moreover, a barrel shifter is adopted for loading a new
input string into the shift register in order to achieve higher
speed. However, the original PDLZW uses a simple FIFO update
strategy, which is not efficient. Therefore, a new window-based
updating technique is implemented to better distinguish how often
each particular address in the window is referenced. A freezing
policy is applied to the most frequently referenced address, which is
not updated until all the other addresses in the window have reached
the same priority. This guarantees that more frequently referenced
addresses are not updated until their time comes. This updating
policy improves the compression efficiency of the proposed algorithm
while keeping the architecture low in complexity and easy to
implement.
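One reading of the freezing policy can be sketched as follows; the counter bookkeeping and victim selection here are an interpretation of the abstract, not the exact PDLZW hardware design.

```python
# Hedged sketch of the window-based replacement idea: each reference to
# a dictionary address in the window bumps a counter, and the
# most-referenced address is "frozen" (never chosen as the replacement
# victim) until every other address has caught up to the same count.

class Window:
    def __init__(self, addresses):
        self.refs = {a: 0 for a in addresses}

    def touch(self, address):
        self.refs[address] += 1      # address was referenced

    def victim(self):
        """Address to overwrite next: least-referenced, skipping the
        frozen (most-referenced) address unless all counts are equal."""
        counts = self.refs.values()
        if min(counts) == max(counts):           # everyone caught up:
            return next(iter(self.refs))         # nothing is frozen
        top = max(self.refs, key=self.refs.get)  # frozen address
        rest = {a: c for a, c in self.refs.items() if a != top}
        return min(rest, key=rest.get)

w = Window([0, 1, 2, 3])
for a in (2, 2, 2, 1):
    w.touch(a)
print(w.victim())  # address 2 is frozen; an untouched address is evicted
```

Compared with plain FIFO, this keeps frequently matched dictionary entries resident, which is where the claimed compression gain would come from.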
Abstract: An opportunistic network is a kind of Delay Tolerant Network (DTN) in which nodes come into contact with each other opportunistically and communicate wirelessly; an end-to-end path between source and destination may never have existed, and disconnection and reconnection are common. Because of the nature of such a network, there may be no complete path from source to destination most of the time, and even when a path exists it can be very unstable and may change or break quickly. Routing is therefore one of the main challenges in this environment, and in order to make communication possible, intermediate nodes have to play an important role in opportunistic routing protocols. In this paper we propose Adaptive Fuzzy Routing in opportunistic networks (AFRON). The protocol uses simple parameters, namely Message Transmission Count, Message Size, and Time To Live, as fuzzy inputs to find the path to the destination node, increasing the delivery ratio and decreasing buffer consumption across all nodes of the network.
Abstract: Because customers in the new century express globally
increasing demands, networks of interconnected businesses have been
established, and the management of such networks appears to be a
major key to gaining competitive advantage. Supply chain management
encompasses such managerial activities. Within a supply chain,
quality plays a critical role. QFD is a widely utilized tool that
serves not only to bring quality to the ultimate provision of
products or service packages required by the end customer or the
retailer, but also to establish a satisfactory relationship with the
initial customer, that is, the wholesaler. However, the wholesalers'
cooperation is considerably based on capabilities that are heavily
dependent on their locations and existing circumstances. It is
therefore undeniable that, for every company, each wholesaler
possesses a specific importance ratio that can heavily influence the
figures calculated in the House of Quality in QFD. Moreover, given
the competitiveness of today's marketplace, it is widely recognized
that consumers' expressed demands are highly volatile across
production periods. Such instability and proneness to change are very
tangibly noticeable, and taking them into account during the analysis
of the HOQ is widely influential and doubtless required. For a more
reliable outcome in such matters, this article demonstrates the
viability of applying the Analytic Network Process to account for the
wholesalers' reputation, and simultaneously introduces a mortality
coefficient for the reliability and stability of the consumers'
expressed demands over the course of time. Following this, the paper
elaborates on the relevant contributory factors and approaches
through the calculation of such coefficients. In the end, the article
concludes that an empirical application is needed to achieve broader
validity.
Abstract: In this work a new method for low-complexity image coding
is presented that permits different settings and great scalability in
the generation of the final bit stream. This coding presents a
continuous-tone still image compression system that combines lossy
and lossless compression, making use of finite-arithmetic reversible
transforms. Both the color-space transform and the wavelet transform
are reversible. The transformed coefficients are coded by means of a
coding system based on a subdivision into smaller components (CFDS),
similar to bit-significance coding. The subcomponents so obtained are
reordered by means of a highly configurable alignment system that,
depending on the application, makes it possible to rearrange the
elements of the image and obtain different importance levels from
which the bit stream will be generated. The subcomponents of each
importance level are coded using a variable-length entropy coding
system (VBLm) that permits the generation of an embedded bit stream.
This bit stream itself constitutes a bit stream coding a compressed
still image. However, the use of a packing system on the bit stream
after the VBLm stage allows a final, highly scalable bit stream to be
built from a basic image level and one or several improvement levels.
Abstract: Sparse representation, which can represent high-dimensional
data effectively, has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of data samples. To overcome this limitation,
we develop a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL), in
this paper. The proposed DR-SSL algorithm can not only make use of
the sparse representation to model the data, but can also effectively
employ the label information to guide the dimensionality reduction
procedure. In addition, the presented algorithm can effectively deal
with the out-of-sample problem. The experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
Abstract: The one-class support vector machine “support vector
data description” (SVDD) is an ideal approach for anomaly or outlier
detection. However, for the applicability of SVDD in real-world
applications, ease of use is crucial. The results of SVDD are
largely determined by the choice of the regularisation parameter C
and the kernel parameter of the widely used RBF kernel. While for
two-class SVMs the parameters can be tuned using cross-validation
based on the confusion matrix, for a one-class SVM this is not
possible, because only true positives and false negatives can occur
during training. This paper proposes an approach to find the optimal
set of parameters for SVDD solely based on a training set from
one class and without any user parameterisation. Results on artificial
and real data sets are presented, underpinning the usefulness of the
approach.
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain experts
by sparing them from contemplating irrelevant data and instead
pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data
description” (SVDD) is utilised to calculate distances of feature
vectors. SVDDSUBSEQ is proposed as a novel approach that allows
subsequences in multivariate time series data to be classified. The
approach detects unexpected faults without modelling effort, as shown
by experimental results on recordings from test drives.
Abstract: A novel thermo-sensitive superabsorbent hydrogel
with salt- and pH-responsive properties was obtained by grafting
mixtures of acrylic acid (AA) and N-isopropylacrylamide (NIPAM)
monomers onto kappa-carrageenan (kC), using ammonium
persulfate (APS) as a free radical initiator in the presence of
methylene bisacrylamide (MBA) as a crosslinker. Infrared
spectroscopy was carried out to confirm the chemical structure of the
hydrogel. Moreover, morphology of the samples was examined by
scanning electron microscopy (SEM). The effect of MBA
concentration and AA/NIPAM weight ratio on the water absorbency
capacity has been investigated. The swelling variations of hydrogels
were explained according to swelling theory based on the hydrogel
chemical structure. The hydrogels exhibited salt-sensitivity and
cation-exchange properties. The temperature- and pH-reversibility of
the hydrogels makes these intelligent polymers good candidates as
potential carriers for bioactive agents, e.g. drugs.
Abstract: When acid is pumped into damaged reservoirs for
damage removal/stimulation, distorted inflow of acid into the
formation occurs because the acid preferentially travels into highly
permeable regions rather than low-permeability regions, or, in
general, into the path of least resistance. This can lead to poor
zonal coverage and hence warrants diversion to achieve effective
placement of the acid.
Diversion is desirably a reversible technique of temporarily reducing
the permeability of high perm zones, thereby forcing the acid into
lower perm zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.
Abstract: Since primary school trips usually start from home, many
scholars have focused on the home end for data gathering, and
category analysis has often been relied upon when predicting school
travel demand. In this paper, the school end was used for data
gathering and multivariate regression for future travel demand
prediction. 9859 pupils were surveyed by questionnaire at 21 primary
schools. The town was divided into 5 zones. The study was carried out
in Skudai Town, Malaysia. Based on the hypothesis that the number of
primary school trip ends is expected to be the same because school
trips are fixed, the choice of trip end should have an
inconsequential effect on the outcome. The
study compared empirical data for home and school trip end
productions and attractions. The variance between the two data sets
was insignificant, although some claims from the home-based family
survey were found to be grossly exaggerated. Data from the school
trip ends
was relied on for travel demand prediction because of its
completeness. Accessibility, trip attraction and trip production were
then related to school trip rates under daylight and dry weather
conditions. The paper concluded that accessibility is an important
parameter when predicting demand for future school trip rates.