Abstract: The aim of this study is to test the “work values” inventory developed by Tevruz and Turgut and to utilize the concept in a model that aims to create a greater understanding of the work experience. The study examines the multiple effects of work values, work-value congruence and work centrality on organizational citizenship behavior. In this respect, it is hypothesized that work values and work-value congruence predict organizational citizenship behavior through work centrality. A work-value congruence test and Tevruz and Turgut's work values inventory were administered, along with Kanungo's work centrality test and Podsakoff et al.'s [47] organizational citizenship behavior test, to employees working in Turkish SMEs. The study validated that Tevruz and Turgut's work values inventory and the work-value congruence test are reliable and can be used in future research. The study revealed a mediating role of work centrality only for the relationship between work values and the responsibility dimension of citizenship behavior. Most importantly, this study introduced an important concept, work-value congruence, which enables a better understanding of work values and their relation to various attitudinal variables.
Abstract: The proliferation of web applications and the pervasiveness of mobile technology make web-based attacks more attractive and easier to launch. A Web Application Firewall (WAF) is an intermediate tool between the web server and users that provides comprehensive protection for web applications. A WAF follows a negative security model, in which the detection and prevention mechanisms are based on predefined or user-defined attack signatures and patterns. However, a WAF alone is not adequate to offer the best defense against web vulnerabilities, which are increasing in number and complexity daily. This paper presents a methodology to automatically design a positive security model which identifies and allows only legitimate web queries. The paper shows that a true positive rate of more than 90% can be achieved.
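The positive security idea above can be illustrated with a minimal sketch, under invented assumptions (the parameter names, training data and the two character-class rules below are illustrative, not the paper's actual method): legitimate queries are profiled during training, and any query that deviates from the learned per-parameter profile is rejected.

```python
import re
from collections import defaultdict

def learn_profiles(legitimate_queries):
    """Record the maximum length and character classes seen per parameter."""
    profiles = defaultdict(lambda: {"max_len": 0, "classes": set()})
    for query in legitimate_queries:
        for param, value in query.items():
            p = profiles[param]
            p["max_len"] = max(p["max_len"], len(value))
            p["classes"].add(_char_class(value))
    return profiles

def _char_class(value):
    """Very coarse character-class profile of a parameter value."""
    if re.fullmatch(r"[0-9]+", value):
        return "digits"
    if re.fullmatch(r"[A-Za-z0-9_]+", value):
        return "alnum"
    return "other"

def is_legitimate(query, profiles):
    """Allow only parameters and value shapes seen during training."""
    for param, value in query.items():
        if param not in profiles:
            return False                      # unknown parameter
        p = profiles[param]
        if len(value) > p["max_len"]:
            return False                      # unusually long value
        if _char_class(value) not in p["classes"]:
            return False                      # unexpected character class
    return True

# Illustrative training set of legitimate queries.
training = [{"id": "42", "user": "alice"}, {"id": "7", "user": "bob_99"}]
profiles = learn_profiles(training)
print(is_legitimate({"id": "13", "user": "carol"}, profiles))    # benign query
print(is_legitimate({"id": "1 OR 1=1", "user": "x"}, profiles))  # injection-like
```

The injection-like value fails both the length bound and the character-class check, which is the essence of a whitelist-based (positive) model: anything not explicitly profiled as legitimate is denied.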
Abstract: In this paper, three types of defected ground structure (DGS) units are investigated: triangular-head (TH), rectangular-head (RH) and U-shape (US). They are further used in the design of low-pass and band-pass filters (LPF and BPF), and the obtained performances are examined. The LPF employing the RH-DGS geometry offers the advantages of compact size, low insertion loss and a wide stopband compared to the other filters. It provides a cutoff frequency of 2.5 GHz, the largest 20 dB rejection bandwidth, extending from 2.98 to 8.76 GHz, the smallest transition region and the sharpest cutoff. The BPF based on the RH-DGS has the highest bandwidth (BW) of about 0.74 GHz and the lowest center frequency of 3.24 GHz, whereas the other BPFs have BWs of less than 0.7 GHz.
Abstract: The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important for a smooth system development process. However, manually checking this consistency using a checklist is a time-consuming process. At the same time, the limits of human ability to detect errors are among the factors that affect the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to verify the correctness of the diagrams drawn. The consistency check from the context diagram to the lower-level data flow diagrams is embedded in the tool to overcome the manual checking problem.
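The core balancing rule behind such a consistency check can be sketched as follows. This is a hedged illustration, not the tool's actual data model: diagrams are represented as lists of (source, destination, label) flows, and the rule checked is that the flows crossing a process in the parent diagram match the flows crossing the boundary of its child diagram.

```python
def boundary_flows(diagram, internal_nodes):
    """Flows with exactly one endpoint inside the decomposed process."""
    inputs, outputs = set(), set()
    for src, dst, label in diagram:
        if dst in internal_nodes and src not in internal_nodes:
            inputs.add(label)
        elif src in internal_nodes and dst not in internal_nodes:
            outputs.add(label)
    return inputs, outputs

def is_balanced(parent, process, child, child_nodes):
    """Compare the process's flows in the parent with the child's boundary."""
    parent_in, parent_out = boundary_flows(parent, {process})
    child_in, child_out = boundary_flows(child, child_nodes)
    return parent_in == child_in and parent_out == child_out

# Context diagram: a customer sends an "order", the system returns an "invoice".
context = [("customer", "P0", "order"), ("P0", "customer", "invoice")]
# Level-0 diagram decomposing process P0 into P1 and P2.
level0 = [("customer", "P1", "order"),
          ("P1", "P2", "order details"),
          ("P2", "customer", "invoice")]
print(is_balanced(context, "P0", level0, {"P1", "P2"}))  # True: diagrams balance
```

Flows internal to the child (here "order details") are deliberately ignored: balancing only constrains what crosses the decomposition boundary.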
Abstract: This research explores the development of the structure of a Carbon Credit Registry System that accords with the needs of future events in Thailand. It also explores the big picture of all the connected systems by referring to the design of each system, the Data Flow Diagram, and the design of the system's data using the DES standard. The purpose of this paper is to show how to design the model of each system. Furthermore, the paper can serve as a guideline for designing an appropriate Carbon Credit Registry System.
Abstract: Determining depth of anesthesia is a challenging problem in the context of biomedical signal processing. Various methods have been suggested to derive a quantitative index of depth of anesthesia, but most of these methods suffer from high sensitivity during surgery. A novel method based on the energy scattering of samples in the wavelet domain is suggested to represent the basic content of the electroencephalogram (EEG) signal. In this method, the EEG signal is first decomposed into different sub-bands; the samples are then squared and the energy of the sample sequence is computed at each scale and time and normalized; finally, the entropy of the resulting sequences is proposed as a reliable index. Empirical results showed that applying the proposed method to EEG signals can classify the awake, moderate and deep anesthesia states similarly to the BIS.
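One simple instantiation of the decompose-square-normalize-entropy pipeline described above can be sketched in a few lines. The Haar wavelet, the number of levels and the synthetic test signal are illustrative assumptions, not the paper's actual settings, and real EEG would require proper preprocessing.

```python
import math

def haar_decompose(signal, levels):
    """Return detail coefficients per level plus the final approximation."""
    bands, approx = [], list(signal)
    for _ in range(levels):
        avg = [(approx[i] + approx[i + 1]) / math.sqrt(2)
               for i in range(0, len(approx) - 1, 2)]
        det = [(approx[i] - approx[i + 1]) / math.sqrt(2)
               for i in range(0, len(approx) - 1, 2)]
        bands.append(det)
        approx = avg
    bands.append(approx)
    return bands

def wavelet_energy_entropy(signal, levels=3):
    """Entropy of the normalized per-band energy distribution."""
    bands = haar_decompose(signal, levels)
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies) or 1.0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# A flat signal concentrates energy in one band (entropy 0), while an
# irregular one spreads energy across bands (higher entropy).
flat = [1.0] * 16
noisy = [(-1.0) ** i * (i % 5) for i in range(16)]
print(wavelet_energy_entropy(flat) < wavelet_energy_entropy(noisy))  # True
```

The intuition matches the abstract: a more disordered signal scatters energy across scales, raising the entropy index, which is what allows different anesthesia states to be separated.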
Abstract: Lighvan cheese is mainly made from sheep milk in the Sahand mountainside area, located in the northwest of Iran. The main objective of this study was to investigate the effect of enterococci isolated from traditional Lighvan cheese on the quality of Iranian UF white cheese during ripening. The experimental design was a split plot based on randomized complete blocks; the main plots were four types of starters and the subplots were different ripening durations.
Addition of Enterococcus spp. did not significantly (P
Abstract: Three novel and significant contributions are made in this paper. First, a non-recursive formulation of the Haar connection coefficients, pioneered by the present authors, is presented; it can be computed very efficiently and avoids stack and memory overflows. Second, a generalized approach for the state analysis of singular bilinear time-invariant (TI) and time-varying (TV) systems is presented, vis-à-vis the diversified and complex works reported by different authors. Third, a generalized approach for the parameter estimation of bilinear TI and TV systems is also proposed. The unified framework of the proposed method is very significant in that the digital hardware, once designed, can be used single-handedly to perform the complex tasks of state analysis and parameter estimation for different types of bilinear systems. The simplicity, effectiveness and generalized nature of the proposed method are established by applying it to different types of bilinear systems for the two tasks.
Abstract: Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where the ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is also very limited, which requires the use of scraped surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research was conducted using a newer method, progressive freeze concentration, in which the ice crystals form as a layer on a designed heat exchanger surface. In this particular research, a helically structured copper crystallisation chamber was designed and fabricated. The effect of two operating conditions on the performance of the newly designed crystallisation chamber was investigated, namely circulation flowrate and coolant temperature. The performance of the design was evaluated by the effective partition constant, K, calculated from the volume and concentration of the solid and liquid phases. The system was also monitored by a data acquisition tool in order to follow the temperature profile throughout the process. The experimental work showed that a higher flowrate resulted in a lower K, which translates into higher efficiency; efficiency was highest at a flowrate of 1000 ml/min. It was also found that the process gives the highest efficiency at a coolant temperature of -6 °C.
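The effective partition constant K can be sketched as follows. The direct ratio and the integrated solute-balance form below are common definitions in the progressive freeze concentration literature, not necessarily the exact procedure of this study, and the numbers are invented for illustration; K = 0 corresponds to perfect separation (pure ice) and K = 1 to no concentration at all.

```python
import math

def partition_constant_direct(c_solid, c_liquid):
    """K as the ratio of solute concentration in the ice to that in the liquid."""
    return c_solid / c_liquid

def partition_constant_integrated(c0, c_liquid, v0, v_liquid):
    """K from the integrated solute balance C_L/C_0 = (V_L/V_0)**(K - 1),
    using initial (c0, v0) and final (c_liquid, v_liquid) liquid-phase values."""
    return 1.0 + math.log(c_liquid / c0) / math.log(v_liquid / v0)

# Illustrative run: 1000 ml of a 5% solution is reduced to 600 ml of
# 7.8% liquid as the ice layer grows on the chamber wall.
K = partition_constant_integrated(5.0, 7.8, 1000.0, 600.0)
print(round(K, 3))  # ~0.129, i.e. fairly effective solute rejection
```

A lower K at higher circulation flowrate, as reported above, would show up here as a smaller rise in liquid concentration for the same volume reduction.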
Abstract: This work proposes an equivalent CMOS model, for which an algorithm has been created and a performance evaluation carried out. In this context, another commonly used model, the ZSTT (Zero Switching Time Transient) model, is chosen for comparison of all the vital features, and the results for the proposed equivalent CMOS model are promising. Excerpts of the created algorithm are also included.
Abstract: A Bloom filter is a probabilistic, memory-efficient data structure designed to answer rapidly whether an element is present in a set. It can state definitively that an element is not in the set, but can only report its presence with a certain probability. The trade-off in using a Bloom filter is a configurable risk of false positives; the odds of a false positive can be made very low if the number of hash functions is sufficiently large. For spam detection, a weight is attached to each set of elements: the spam weight of a word is a measure used to rate the e-mail, and each word is assigned to a Bloom filter based on its weight. The proposed work introduces an enhanced Bloom filter concept called the Bin Bloom Filter (BBF). The performance of the BBF over the conventional Bloom filter is evaluated under various optimization techniques. A real data set and synthetic data sets are used for the experimental analysis, and results are reported for bin sizes 4, 5, 6 and 7. Analysis of the results shows that the BBF with heuristic techniques performs better than the traditional Bloom filter in spam detection.
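A minimal sketch of the Bin Bloom Filter idea follows, assuming one standard Bloom filter per weight bin; the bit-array size, hash count, bin assignments and example words are illustrative assumptions, not the paper's parameters.

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter over a fixed-size bit array."""
    def __init__(self, size, num_hashes):
        self.size, self.num_hashes = size, num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting a cryptographic hash with the index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # All k bits set => "probably present"; any bit clear => definitely absent.
        return all(self.bits[pos] for pos in self._positions(item))

class BinBloomFilter:
    """One Bloom filter per spam-weight bin; a word goes to its weight's bin."""
    def __init__(self, num_bins, size=1024, num_hashes=4):
        self.bins = [BloomFilter(size, num_hashes) for _ in range(num_bins)]

    def add(self, word, weight_bin):
        self.bins[weight_bin].add(word)

    def lookup(self, word, weight_bin):
        return word in self.bins[weight_bin]

bbf = BinBloomFilter(num_bins=4)
bbf.add("viagra", weight_bin=3)   # high spam weight
bbf.add("meeting", weight_bin=0)  # low spam weight
print(bbf.lookup("viagra", 3))    # True
```

Scoring an e-mail then reduces to looking each word up across the bins and accumulating the weights of the bins that report a hit.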
Abstract: Missing data poses many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. An estimate of variance that accounts for the imputation uncertainty as well as the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete-case analysis, multiple imputation using Markov chain Monte Carlo, and neural network imputation. Furthermore, a public-use dataset is used as an example to illustrate neural network imputation under a complex survey design.
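The simplest way sampling weights can enter an imputation can be sketched with a weighted-mean donor model rather than the paper's neural network; the data and design weights below are invented for illustration only.

```python
def weighted_mean_impute(values, weights):
    """Replace None entries with the survey-weighted mean of observed ones."""
    observed = [(v, w) for v, w in zip(values, weights) if v is not None]
    total_w = sum(w for _, w in observed)
    mean = sum(v * w for v, w in observed) / total_w
    return [mean if v is None else v for v in values]

incomes = [30.0, None, 50.0, None]
weights = [1.0, 2.0, 3.0, 2.0]   # design weights from the complex survey
print(weighted_mean_impute(incomes, weights))  # [30.0, 45.0, 50.0, 45.0]
```

Without the weights the imputed value would be the unweighted mean 40.0; the design weights pull it toward the respondents who represent more of the population, which is the effect the abstract's neural network approach generalizes.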
Abstract: Impurity metals such as manganese and cadmium were selectively removed from a high-tenor cobalt electrolyte solution by solvent extraction using Co-D2EHPA, obtained by converting the functional group of D2EHPA with Co2+ ions. Process parameters such as pH, organic concentration, O/A ratio and kinetics were investigated, and the experiments were conducted as batch tests at laboratory bench scale. Results showed that a significant amount of manganese and cadmium can be extracted using Co-D2EHPA for the optimum processing of the cobalt electrolyte solution at an equilibrium pH of about 3.5. The McCabe-Thiele diagrams constructed from the extraction studies showed that 100% of the impurities can be extracted in four stages for manganese and three stages for cadmium, at O/A ratios of 0.65 and 1.0, respectively. From the stripping study, it was found that 100% of the manganese and cadmium can be stripped from the loaded organic using 0.4 M H2SO4 in a single contact. The loading capacity of Co-D2EHPA for manganese and cadmium was also investigated at different O/A ratios and with different numbers of contact stages between the aqueous and organic phases. Valuable information was obtained for the design of an impurity removal process for the production of pure cobalt with less trouble in the electrowinning circuit.
Abstract: Group contribution methods such as UNIFAC are very useful to researchers and engineers involved in synthesis, feasibility studies, design and optimization of separation processes. They can be applied successfully to predict phase equilibria and excess properties in the development of chemical and separation processes. The main focus of this work was to investigate the possibility of absorbing selected volatile organic compounds (VOCs) into polydimethylsiloxane (PDMS) using three selected UNIFAC group contribution methods. Absorption followed by subsequent stripping is the predominant available abatement technology for VOCs in flue gases prior to their release into the atmosphere. The original, modified and effective UNIFAC models were used in this work. The thirteen VOCs considered in this research are: pentane, hexane, heptane, trimethylamine, toluene, xylene, cyclohexane, butyl acetate, diethyl acetate, chloroform, acetone, ethyl methyl ketone and isobutyl methyl ketone. The computation was done for a solute VOC concentration of 8.55×10^-8, which is well within the infinite dilution region. The results obtained in this study compare very well with those published in the literature, obtained through both measurements and predictions. The phase equilibria obtained in this study show that PDMS is a good absorbent for the removal of VOCs from contaminated air streams through physical absorption.
Abstract: We present BeeBot, the Binus Multi-client Intelligent Telepresence Robot, a custom-built robot system specifically designed for teleconferencing with multiple people using an omnidirectional actuator. The robot is controlled over a computer network, so a manager or supervisor can direct the robot to the intended person to start a discussion or inspection. People tracking and autonomous navigation are intelligent features of this robot. We built a web application for controlling the multi-client telepresence robot and used an open-source teleconference system. Experimental results are presented and the system's performance is evaluated.
Abstract: In this paper, we combine a probabilistic neural method with radial-basis functions in order to construct the lithofacies of the wells DF01, DF02 and DF03, situated in the Triassic province of Algeria (Sahara). Lithofacies identification is a crucial problem in reservoir characterization. Our objective is to facilitate the experts' work in the geological domain and to allow them to quickly obtain the structure and nature of the formations around the borehole. This study intends to design a tool that supports automatic deduction from numerical data. We used a probabilistic formalism to enhance the classification process initiated by a Self-Organizing Map procedure. From well-log data, our system produces the lithofacies of the concerned reservoir wells in a form that is easy to read by a geology expert, who identifies the potential for oil production at a given source and so forms the basis for estimating the financial returns and economic benefits.
Abstract: The “conveyor belt” as a product represents a complex, high-performance component with a wide range of different applications. Further development of these highly complex components demands the integration of new technologies and new enhanced materials. In this context, nanostructured fillers appear to have a more promising effect on the performance of the conveyor belt composite than conventional micro-scaled fillers.
Within the project “DotTrans”, nanostructured fillers such as silicon dioxide are used to optimize performance parameters of conveyor belt systems. The objectives of the project include operating parameters such as energy consumption and friction characteristics as well as adaptive parameters such as cut and wear resistance.
Abstract: It has been said that “the network is the system”. This implies providing levels of service, reliability, predictability and availability that are commensurate with, or better than, those that individual computers provide today. Achieving this requires integrated network management for interconnected networks of heterogeneous devices, including the local campus. In this paper we present a framework to deal effectively with this issue. It consists of the components, and the interactions between them, that are required to perform service fault management. A real-world scenario is used to derive the requirements, which have been applied to the component identification. An analysis of existing frameworks and approaches with respect to their applicability to the framework is also carried out.
Abstract: The face and facial expressions play essential roles in interpersonal communication. Most current work on facial expression recognition attempts to recognize a small set of prototypic expressions such as happiness, surprise, anger, sadness, disgust and fear. However, most human emotions are communicated by changes in one or two discrete features. In this paper, we develop a facial expression synthesis system based on the tracking of facial characteristic points (FCPs) in frontal image sequences. Selected FCPs are automatically tracked using cross-correlation-based optical flow. The proposed synthesis system uses a simple deformable facial feature model with a small set of control points that can be tracked in the original facial image sequences.
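The cross-correlation tracking step can be sketched in one dimension: search a small window in the next frame for the patch that best matches the point's neighborhood in the previous frame. The synthetic "frames" (single rows of intensities), patch size and search radius below are illustrative assumptions, not the paper's actual settings.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    den_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return num / (den_a * den_b) if den_a and den_b else 0.0

def track_point(prev, curr, pos, half=2, search=4):
    """Find the position in `curr` whose patch best matches `prev` around pos."""
    template = prev[pos - half:pos + half + 1]
    best_pos, best_score = pos, -2.0
    for cand in range(pos - search, pos + search + 1):
        patch = curr[cand - half:cand + half + 1]
        if len(patch) < len(template):
            continue  # candidate window falls off the frame edge
        score = ncc(template, patch)
        if score > best_score:
            best_pos, best_score = cand, score
    return best_pos

frame1 = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # feature shifted right by 2
print(track_point(frame1, frame2, pos=4))  # 6
```

Running this per FCP, per frame, in both image dimensions gives the displacement field that drives the deformable feature model.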
Abstract: In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is a very important issue and is the focus of this paper. State-transition Petri nets modeling the scenario-based CIA (communication inter-agents) procedure, an extension of the basic Mobile-IP registration process, were designed and manipulated to describe the system in terms of discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco 1760 router platform running IOS 12.3(15)T with TFTP server software. Finally, stand-alone performance simulations in Simulink (Matlab), both within each subnet and between subnets, are presented, reporting improved end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation: they showed lower end-to-end packet delay for Mobile-IP using the CIA procedure-based early registration. Furthermore, packet flow between subnets improved, reducing losses.