Abstract: Network management techniques have long been of interest to the networking research community. Queue size plays a critical role in network performance. An adequately sized queue maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Accurate estimation of the queuing model parameters is crucial both for initial size estimation and during resource allocation. An accurate resource allocation model for the management system increases network utilization. This paper presents the results of empirical observations of memory allocation for packet-based services.
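The abstract does not name its queuing model, so as a minimal illustration of how model parameters drive queue sizing, assume the simplest case, an M/M/1 queue with arrival rate $\lambda$ and service rate $\mu$:

\[
\rho = \frac{\lambda}{\mu}, \qquad L = \frac{\rho}{1-\rho}, \qquad W = \frac{L}{\lambda},
\]

where $\rho$ is the utilization, $L$ is the mean number of packets in the system (the quantity a buffer must be sized for), and $W$ is the mean delay by Little's law. Because $L$ grows sharply as $\rho \to 1$, small errors in estimating $\lambda$ or $\mu$ change the required queue size dramatically, which is why parameter estimation matters for both initial sizing and later resource allocation.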
Abstract: This paper deals with the development of a Jacobian model for a four-axis SCARA robot arm developed in-house in the laboratory. This model is used to study the relation between the velocities and the forces in the robot while it performs pick-and-place operations.
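For reference, the velocity/force duality this abstract studies takes the following textbook form for a generic SCARA arm with joint variables $q = (\theta_1, \theta_2, d_3, \theta_4)$ and link lengths $l_1, l_2$; the actual arm's parameters and sign conventions are not given in the abstract, so this is the standard form, not the paper's fitted model:

\[
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \omega_z \end{bmatrix}
=
\underbrace{\begin{bmatrix}
-l_1 s_1 - l_2 s_{12} & -l_2 s_{12} & 0 & 0 \\
 l_1 c_1 + l_2 c_{12} &  l_2 c_{12} & 0 & 0 \\
 0 & 0 & 1 & 0 \\
 1 & 1 & 0 & 1
\end{bmatrix}}_{J(q)}
\begin{bmatrix} \dot\theta_1 \\ \dot\theta_2 \\ \dot{d}_3 \\ \dot\theta_4 \end{bmatrix},
\qquad
\tau = J(q)^{T} F,
\]

with $s_1 = \sin\theta_1$, $s_{12} = \sin(\theta_1+\theta_2)$, and so on. The transpose relation $\tau = J^T F$ is what links the end-effector forces arising during pick-and-place to the joint torques.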
Abstract: In this paper the main objective is to analyze the
quality of service of the bus companies operating in the city of
Campos, located in the state of Rio de Janeiro, Brazil. This analysis,
based on the opinion of the bus customers, will help to determine
their degree of satisfaction with the service provided by the bus
companies. The result of this assessment shows that the bus
customers are displeased with the quality of service supplied by the
bus companies. Therefore, it is necessary to identify alternative
solutions to minimize the consequences of the main problems related
to customers' dissatisfaction identified in our evaluation and to help the bus companies operating in Campos better fulfill their riders' needs.
Abstract: While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chroma components are downsampled simultaneously; in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the blue chrominance (Cb) is downsampled than when the red chrominance (Cr) is downsampled, whereas the peak signal-to-noise ratio is higher when the red chrominance is downsampled. In particular, we use the image hats.jpg to demonstrate JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference under either method.
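The single-chroma downsampling comparison can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it isolates only the 2×2 chroma down/upsampling step and the RMSE/PSNR measurement, omits the DCT and quantization stages of full JPEG, and uses a synthetic image in place of hats.jpg.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def down_up_2x2(ch):
    # Average 2x2 blocks, then replicate back to full size.
    h, w = ch.shape
    small = ch[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
    out = ch.copy()
    out[:h//2*2, :w//2*2] = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    return out

def rmse_psnr(orig, recon):
    err = np.sqrt(np.mean((orig - recon) ** 2))
    psnr = 20.0 * np.log10(255.0 / err) if err > 0 else float("inf")
    return err, psnr

# Synthetic test image stands in for hats.jpg, which is not distributed here.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)

ycc = rgb_to_ycbcr(img)
for name, idx in [("Cb only", 1), ("Cr only", 2)]:
    test = ycc.copy()
    test[..., idx] = down_up_2x2(test[..., idx])
    err, psnr = rmse_psnr(img, ycbcr_to_rgb(test))
    print(f"downsample {name}: RMSE={err:.3f}  PSNR={psnr:.2f} dB")
```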
Abstract: A spectrophotometric method was developed for simultaneous quantification of pseudoephedrine hydrochloride (PSE) and triprolidine hydrochloride (TRI) using the second derivative method (zero-crossing technique). The second derivative amplitudes of PSE and TRI were measured at 271 and 321 nm, respectively. The calibration curves were linear in the range of 200 to 1,000 µg/ml for PSE and 10 to 50 µg/ml for TRI. The method was validated for specificity, accuracy, precision, limit of detection and limit of quantitation. The proposed method was applied to the assay and dissolution testing of PSE and TRI in commercial tablets without any chemical separation. The results were compared with those obtained by the official USP31 method, and statistical tests showed no significant difference between the methods at the 95% confidence level. The proposed method is simple, rapid and suitable for routine quality control applications. Keywords: Triprolidine, Pseudoephedrine, Derivative spectrophotometry, Dissolution testing.
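A minimal sketch of the second-derivative, zero-crossing step, assuming a Savitzky-Golay derivative filter (the paper's differentiation algorithm is not stated) and synthetic Gaussian bands in place of real PSE/TRI spectra:

```python
import numpy as np
from scipy.signal import savgol_filter

# Wavelength axis (nm) and two synthetic, overlapping absorption bands
# standing in for PSE and TRI; real spectra would come from the instrument.
wl = np.arange(220.0, 360.0, 0.5)
gauss = lambda c, w: np.exp(-((wl - c) ** 2) / (2 * w ** 2))
spectrum = 1.0 * gauss(262.0, 12.0) + 0.4 * gauss(300.0, 15.0)

# Second-derivative spectrum via Savitzky-Golay (21-point window, cubic fit).
d2 = savgol_filter(spectrum, window_length=21, polyorder=3,
                   deriv=2, delta=wl[1] - wl[0])

# Zero-crossing technique: read each analyte's amplitude at the wavelength
# where the other analyte's second derivative crosses zero (271 nm and
# 321 nm in the abstract).
for nm in (271.0, 321.0):
    amp = d2[np.argmin(np.abs(wl - nm))]
    print(f"d2 amplitude at {nm:.0f} nm: {amp:.3e}")
```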
Abstract: In this research, the diffusion of innovation regarding
smartphone usage is analysed through a consumer behaviour theory.
This research aims to determine whether a pattern surrounding the
diffusion of innovation exists. As a methodology, an empirical study
of the switch from a conventional cell phone to a smartphone was
performed. Specifically, a questionnaire survey was completed by
general consumers, and the situational and behavioural characteristics
of switching from a cell phone to a smartphone were analysed. In
conclusion, we found that the speed of the diffusion of innovation, the
consumer behaviour characteristics, and the utilities of the product
vary according to the stage of the product life cycle.
Abstract: The utilization of cheese whey as a fermentation substrate to produce bio-ethanol is an effort to supply bio-ethanol demand as a renewable energy source. As with other process systems, modeling is required for fermentation process design, optimization and plant operation. This research aims to study the fermentation of cheese whey by applying mathematical modeling and fundamental chemical engineering concepts, and to investigate the characteristics of the cheese whey fermentation process. Steady state simulation results for inlet substrate concentrations of 50, 100 and 150 g/l, and various values of hydraulic retention time, showed maximum ethanol productivities of 0.1091, 0.3163 and 0.5639 g/l.h respectively. Those values were achieved at a hydraulic retention time of 20 hours, the minimum value used in this modeling, showing that operating the reactor at low hydraulic retention time is favorable. A model of bio-ethanol production from cheese whey will enhance the understanding of what really happens in the fermentation process.
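A minimal continuous-fermentation (chemostat) balance of the kind such models build on, assuming Monod kinetics since the paper's rate expressions are not quoted here (biomass $X$, substrate $S$, product $P$, yield $Y_{X/S}$, specific production rate $q_P$):

\[
\frac{dX}{dt} = (\mu - D)X, \qquad
\frac{dS}{dt} = D(S_{in} - S) - \frac{\mu X}{Y_{X/S}}, \qquad
\frac{dP}{dt} = q_P X - DP,
\]
\[
\mu = \frac{\mu_{max} S}{K_S + S}, \qquad D = \frac{1}{\tau_{HRT}}, \qquad Q_P = D\,P.
\]

At steady state $\mu = D$, and since the dilution rate $D$ is the reciprocal of the hydraulic retention time, ethanol productivity $Q_P = D\,P$ rises as retention time falls, until washout; this is consistent with the maxima above being found at the lowest simulated retention time of 20 hours.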
Abstract: Despite many years of development, the mainstream of workflow solutions from the IT industry has not made ad-hoc workflow support easy or inexpensive in MIS. Moreover, most academic approaches tend to make the resulting BPM (Business Process Management) more complex and clumsy, since they typically necessitate workflow modeling. To cope well with various ad-hoc or casual requirements on workflows while still keeping things simple and inexpensive, the author first puts forth the TSM design pattern, which provides flexible workflow control while minimizing the demand for predefinition and workflow modeling, and thereby introduces a generic approach for building BPM in workflow-aware MISs (Management Information Systems) with low development and running expenses.
Abstract: An implant elicits a biological response in the surrounding tissue which determines the acceptance and long-term function of the implant. Dental implants have become one of the main clinical therapy methods after tooth loss. A successful implant is in contact with bone and with soft tissue, represented here by fibroblasts. In our study we focused on the interaction of six differently chemically and physically modified titanium implants (Tis-MALP, Tis-O, Tis-OA, Tis-OPAAE, Tis-OZ, Tis-OPAE) with alveolar fibroblasts as well as with five types of microorganisms (S. epidermidis, S. mutans, S. gordonii, S. intermedius, C. albicans). Microorganism adhesion was determined by colony forming unit (CFU) counts and biofilm formation. The presence of α3β1 and vinculin expression on alveolar fibroblasts was demonstrated using phospho-specific cell-based ELISA (PACE). Alveolar fibroblasts showed the highest expression of these proteins on Tis-OPAAE and Tis-OPAE. This corresponds with the results of bacterial adhesion and biofilm formation, and it was related to the lowest production of collagen I by alveolar fibroblasts on the Tis-OPAAE titanium disc.
Abstract: This paper proposes a hybrid method for eyes localization
in facial images. The novelty is in combining techniques
that utilise colour, edge and illumination cues to improve accuracy.
The method is based on the observation that eye regions have dark
colour, high edge density and low illumination compared to other parts of the face. The first step in the method is to extract
connected regions from facial images using colour, edge density and
illumination cues separately. Some of the regions are then removed
by applying rules that are based on the general geometry and shape
of eyes. The remaining connected regions obtained through these
three cues are then combined in a systematic way to enhance the
identification of the candidate regions for the eyes. The geometry
and shape based rules are then applied again to further remove the
false eye regions. The proposed method was tested using images from
the PICS facial images database, achieving accuracies of 93.7% for initial blob extraction and 87% for final eye detection.
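The cue-combination pipeline lends itself to a short sketch. The following is a hypothetical reconstruction, not the paper's implementation: the thresholds, the two-of-three voting rule, and the geometry filters are all assumptions, and a synthetic image stands in for the PICS data.

```python
import numpy as np
import cv2

def cue_masks(bgr):
    """Binary masks for the three cues: dark colour, edge density, low illumination."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Colour cue: dark pixels (eye regions are darker than surrounding skin).
    dark = (gray < 70).astype(np.uint8)

    # Edge cue: local density of Canny edges via a normalized box filter.
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    edge_dense = (cv2.boxFilter(edges, -1, (15, 15)) > 0.2).astype(np.uint8)

    # Illumination cue: below-average brightness after strong smoothing.
    illum = cv2.GaussianBlur(gray, (31, 31), 0)
    low_illum = (illum < illum.mean()).astype(np.uint8)
    return dark, edge_dense, low_illum

def candidate_eye_regions(bgr, min_area=30, max_aspect=4.0):
    dark, edge_dense, low_illum = cue_masks(bgr)
    # Combine cues: keep pixels supported by at least two of the three cues.
    combined = ((dark + edge_dense + low_illum) >= 2).astype(np.uint8)

    # Connected regions, filtered by simple geometry/shape rules.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(combined)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area >= min_area and w / max(h, 1) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes

# Synthetic stand-in for a PICS face image (any BGR image works here).
img = np.full((200, 200, 3), 180, np.uint8)
cv2.circle(img, (70, 90), 8, (20, 20, 20), -1)   # left "eye"
cv2.circle(img, (130, 90), 8, (20, 20, 20), -1)  # right "eye"
print(candidate_eye_regions(img))
```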
Abstract: This paper investigates the potential of support vector machines and Gaussian process based regression approaches to model the oxygen-transfer capacity from experimental data of multiple plunging jet oxygenation systems. The results suggest the utility of both modeling techniques in predicting the overall volumetric oxygen transfer coefficient (KLa) from the operational parameters of a multiple plunging jet oxygenation system. The support vector machine achieved correlation coefficient, root mean square error and coefficient of determination values of 0.971, 0.002 and 0.945 respectively, in comparison to values of 0.960, 0.002 and 0.920 respectively achieved by Gaussian process regression. Further, the performance of both regression approaches in predicting the overall volumetric oxygen transfer coefficient was compared with the empirical relationship for multiple plunging jets. The comparison suggests that the support vector machines approach works well relative to both the empirical relationship and the Gaussian process approach, and could successfully be employed in modeling oxygen transfer.
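As a hedged illustration of the two regression techniques compared above, the sketch below fits both models and reports the same three metrics; synthetic data stands in for the plunging-jet measurements, and the kernels and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for operational parameters (e.g. jet velocity, number
# of jets) and measured KLa; the paper's real data set is not reproduced here.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(120, 3))
y = 0.05 * X[:, 0] + 0.03 * X[:, 1] ** 2 + 0.02 * X[:, 2] \
    + rng.normal(0, 0.002, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "SVM (RBF)": SVR(kernel="rbf", C=10.0, epsilon=0.001),
    "GP (RBF+noise)": GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    cc = np.corrcoef(y_te, pred)[0, 1]           # correlation coefficient
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    r2 = r2_score(y_te, pred)                    # coefficient of determination
    print(f"{name}: CC={cc:.3f}  RMSE={rmse:.4f}  R2={r2:.3f}")
```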
Abstract: This paper gives pilot results of a project oriented toward the use of data mining techniques for knowledge discovery from production systems, with the discovered knowledge used in the management of these systems. Simulation models of manufacturing systems were developed to obtain the necessary production data. The authors have developed a way of storing the data obtained from the simulation models in a data warehouse. A data mining model was created using specific methods and selected techniques for defined problems of production system management. The new knowledge was applied to the production management system and tested on simulation models of the production system. An important benefit of the project is the proposal of a new methodology, focused on data mining from the databases that store operational data about the production process.
Abstract: Recent advances in wireless sensor networks have led to many routing methods designed for energy efficiency in wireless sensor networks. Although many routing methods have been proposed for ubiquitous sensor networks (USNs), a single routing method cannot be energy-efficient if the environment of the ubiquitous sensor network varies. We focus on controlling network access to the various hosts and the services they offer, rather than on securing them one by one with a network security model. When ubiquitous sensor networks are deployed in hostile environments, an adversary may compromise some sensor nodes and use them to inject false sensing reports. False reports can lead not only to false alarms but also to the depletion of the limited energy resource in battery-powered networks. The interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication. This paper presents an LMDD (low energy method for data delivery) algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm changes protocols based on the output of fuzzy logic, which is the fitness level of the protocols for the environment.
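Since the paper's membership functions and rule base are not given, the sketch below is a hypothetical reconstruction of the idea behind LMDD: compute a fuzzy fitness for each candidate protocol from environment variables, then switch to the fittest. The protocol names, membership breakpoints, and inputs are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fitness(protocol, density, residual_energy):
    """Hypothetical fuzzy fitness of a routing protocol for the current
    environment (node density in nodes/m^2, residual energy in [0, 1])."""
    rules = {
        # (density membership, energy membership, rule weight)
        "flooding":    [(tri(density, 0.0, 0.1, 0.3),
                         tri(residual_energy, 0.5, 1.0, 1.5), 1.0)],
        "clustering":  [(tri(density, 0.2, 0.6, 1.0),
                         tri(residual_energy, 0.2, 0.5, 0.9), 1.0)],
        "chain-based": [(tri(density, 0.5, 1.0, 1.5),
                         tri(residual_energy, 0.0, 0.2, 0.5), 1.0)],
    }
    # Mamdani-style: min for rule firing strength, weighted max over rules.
    return max(min(d, e) * w for d, e, w in rules[protocol])

def select_protocol(density, residual_energy):
    protocols = ("flooding", "clustering", "chain-based")
    return max(protocols, key=lambda p: fitness(p, density, residual_energy))

print(select_protocol(density=0.55, residual_energy=0.5))  # -> clustering
```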
Abstract: The addition of milli- or micro-sized particles to the heat transfer fluid is one of the many techniques employed for improving the heat transfer rate. Though it looks simple, this method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manyfold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. It is therefore imperative that the Reynolds number (Re) and the volume fraction be optimal for good thermal-hydraulic effectiveness. In this work, heat transfer enhancement using aluminium oxide nanofluids at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamic modeling of the nanofluid flow, adopting the single phase approach. Nanofluid up to a volume fraction of 1% is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (1%, 4% and 6%) show reasonable agreement with both experimental and numerical results available in the literature. The computationally inexpensive single phase approach can therefore be used for heat transfer and pressure drop prediction of new nanofluids.
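In the single-phase approach the nanofluid is treated as one fluid with effective properties. The paper's exact correlations are not quoted, so the classical Maxwell (conductivity) and Brinkman (viscosity) models are shown as representative examples, with $\phi$ the particle volume fraction and subscripts $p$ and $bf$ for particle and base fluid:

\[
k_{eff} = k_{bf}\,\frac{k_p + 2k_{bf} + 2\phi\,(k_p - k_{bf})}{k_p + 2k_{bf} - \phi\,(k_p - k_{bf})},
\qquad
\mu_{eff} = \frac{\mu_{bf}}{(1-\phi)^{2.5}},
\]

after which the CFD solution yields the Nusselt number as $Nu = hD/k_{eff}$ for pipe diameter $D$ and wall heat transfer coefficient $h$. The competing trends are visible directly: $k_{eff}$ raises heat transfer while $\mu_{eff}$ raises pressure drop, which is why an optimal volume fraction exists.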
Abstract: This paper describes the design of a programmable FSK modulator based on a VCO and its implementation in a 0.35 µm CMOS process. The circuit is used to transmit digital data at a 100 kbps rate in the frequency range of 400-600 MHz. The design and operation of the modulator are discussed briefly. Further, the characteristics of the PLL, frequency synthesizer, VCO and the whole design are elaborated. The variation between the proposed and tested specifications is presented. Finally, the layout of the sub-modules, pin configurations, final chip and test results are presented.
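As background for the modulator described above, binary FSK from a VCO shifts the instantaneous output frequency according to the data bit; the frequency deviation $\Delta f$ below is an assumed symbol, since the abstract does not state it:

\[
f_{out}(t) = f_c + b(t)\,\Delta f, \qquad b(t) \in \{-1, +1\}, \qquad
T_b = \frac{1}{100\ \text{kbps}} = 10\ \mu\text{s},
\]

with the carrier $f_c$ programmable over 400-600 MHz by the PLL frequency synthesizer. Because the VCO integrates its control voltage into phase, the resulting FSK is phase-continuous.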
Abstract: Dynamic Causal Modeling (DCM) of functional Magnetic Resonance Imaging (fMRI) data is a promising technique for studying connectivity among brain regions and the effects of stimuli by modeling neuronal interactions from neuroimaging time series. The aim of this study is to characterize the mirror neuron system (MNS) in an elderly group (age: 60-70 years old). Twenty volunteers were MRI scanned with visual stimuli to study a functional brain network. DCM was employed to determine the mechanism of mirror neuron effects. The results revealed major activated areas including the precentral gyrus, inferior parietal lobule, inferior occipital gyrus, and supplementary motor area. When visual stimuli were presented, the feed-forward connectivity from the visual area to the conjunction area increased and was forwarded to the motor area. Moreover, the connectivity from the conjunction areas to the premotor area also increased. Such findings can be useful for future diagnostics for elderly patients with diseases such as Parkinson's and Alzheimer's.
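For reference, the standard bilinear DCM state equation (the general formulation, not this study's fitted values) describes the hidden neuronal states $z$ driven by experimental inputs $u$:

\[
\dot{z} = \Big(A + \sum_j u_j\,B^{(j)}\Big) z + C\,u,
\]

where $A$ holds the intrinsic (endogenous) connections, each $B^{(j)}$ the modulation of those connections by input $u_j$, and $C$ the direct driving effects of the inputs; a hemodynamic model then maps $z$ to the observed BOLD signal. The reported increases in feed-forward connectivity correspond to larger effective-connectivity entries in $A$, or in $B^{(j)}$ under the visual stimulus, along the visual-to-conjunction-to-motor paths.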
Abstract: In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector to a high dimensional kernel space in a nonlinear fashion, where the solution is calculated from a linear equation set. In comparison to the original SVM, which involves a quadratic programming task, LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is only optimal if the training samples are corrupted by Gaussian noise. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation to achieve a sparse and robust estimate.
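The linear equation set referred to above is the standard LS-SVM dual system, shown here for reference; with kernel matrix $\Omega_{kl} = K(x_k, x_l)$, regularization parameter $\gamma$, and targets $y$:

\[
\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ y \end{bmatrix},
\qquad
f(x) = \sum_k \alpha_k K(x, x_k) + b.
\]

Since the solution of this system generically makes every $\alpha_k$ nonzero, sparseness is lost, which is precisely the drawback that motivates the reformulation proposed in the paper.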
Abstract: This paper considers the problem of finding a low cost chip set for a minimum cost partitioning of large logic circuits. Chip sets are selected from a given library; each chip in the library has a different price, area, and I/O pin count. We propose a low cost chip set selection algorithm. The inputs to the algorithm are a netlist and the chip information in the library. The output is a list of chip sets that satisfy the area and maximum partition count constraints, sorted from minimum to maximum cost. We used MCNC benchmark circuits for experiments. The experimental results show that all of the chip sets found satisfy the multiple partitioning constraints.
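A hypothetical sketch of such a selection loop follows; the paper's actual algorithm is not reproduced here, and the chip library, capacity model, and the omission of per-partition cut-net pin demand are all simplifying assumptions.

```python
from itertools import combinations_with_replacement

# Hypothetical chip library: name -> (price, gate-area capacity, I/O pins).
LIBRARY = {
    "A": (10.0, 5000, 64),
    "B": (18.0, 12000, 96),
    "C": (30.0, 25000, 128),
}

def feasible_chip_sets(netlist_area, netlist_pins, max_partitions):
    """All chip sets that can hold the netlist within max_partitions chips,
    sorted from minimum to maximum total cost."""
    results = []
    chips = sorted(LIBRARY)
    for k in range(1, max_partitions + 1):
        for combo in combinations_with_replacement(chips, k):
            area = sum(LIBRARY[c][1] for c in combo)
            pins = sum(LIBRARY[c][2] for c in combo)
            if area >= netlist_area and pins >= netlist_pins:
                cost = sum(LIBRARY[c][0] for c in combo)
                results.append((cost, combo))
    return sorted(results)

# Example: a netlist needing 20000 gate-equivalents and 150 pins,
# partitioned into at most 3 chips.
for cost, combo in feasible_chip_sets(20000, 150, 3):
    print(f"cost={cost:5.1f}  chips={combo}")
```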
Abstract: Experiments have been carried out at sub-critical
Reynolds number to investigate free-to-roll motions induced by
forebody and/or wing complex flow on a 30° swept-back nonslender-wing/slender-body model for static and dynamic (pitch-up) cases. For the dynamic (pitch-up) case it has been observed that roll
amplitude decreases and lag increases with increase in pitching
speed. Decrease in roll amplitude with increase in pitch rate is
attributed to low disturbing rolling moment due to weaker interaction
between forebody and wing flow components. Asymmetric forebody
vortices dominate and control the roll motion of the model in
dynamic case when the non-dimensional pitch rate is ≥ 1×10⁻².
Effectiveness of the active control scheme utilizing rotating nose with
artificial tip perturbation is observed to be low in the angle of attack
region where the complex flow over the wings has contributions from
both forebody and wings.
Abstract: Intellectual capital measurement is a central aspect of knowledge management. The measurement and evaluation of intangible assets play a key role in allowing effective management of these assets as sources of competitiveness. For these reasons, managers and practitioners need conceptual and analytical tools that take into account the unique characteristics and economic significance of Intellectual Capital. Following this lead, we propose an efficiency and productivity analysis of Intellectual Capital as a determinant factor of a company's competitive advantage. The analysis is carried out by means of Data Envelopment Analysis (DEA) and the Malmquist Productivity Index (MPI). These techniques identify best-practice companies that have achieved competitive advantage by implementing successful Intellectual Capital management strategies, and offer inefficient companies development paths by means of benchmarking. The proposed methodology is applied to the Biotechnology industry in the period 2007-2010.
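For reference, the (output-oriented) Malmquist Productivity Index between periods $t$ and $t+1$ is built from DEA distance functions $D^t$ evaluated at the input-output bundles $(x, y)$ of the two periods; values above one indicate productivity growth:

\[
MPI = \left[ \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
\cdot \frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)} \right]^{1/2}.
\]

The index factors into an efficiency-change term (catching up to the frontier) and a technical-change term (movement of the frontier itself), which is what allows the analysis to separate benchmarking gains from industry-wide progress.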