Abstract: This study was a case study analysis of the Thai Asia Pacific Brewery Company. The purpose was to analyze the company's marketing objective, marketing strategy at the company level, and marketing mix before liquor liberalization in 2000. The study employed a qualitative, descriptive research approach and yielded the following results: (1) the marketing objective was to increase the market share of Heineken and Amstel; (2) the company's marketing strategies were a brand building strategy and a distribution strategy. In addition, the company pursued the following marketing mix strategies. Product strategy: the company added beer brands, namely Amstel and Tiger, to give consumers more choice, and carried out product and marketing research and product development. Price strategy: the company took cost, competitors, the market, the economic situation, and taxes into consideration. Promotion strategy: the company conducted sales promotion and advertising. Distribution strategy: the company extended its channels of distribution into food shops, pubs, and various entertainment venues. This study may benefit interested persons and those engaged in the beer business.
Abstract: This paper presents a method to support dynamic
packing in cases when no collision-free path can be found. The
method, which is primarily based on path planning and shrinking of
geometries, suggests a minimal geometry design change that results
in a collision-free assembly path. A supplementary approach to optimize the geometry design change with respect to redesign cost is described. Supporting this dynamic packing method, a new method to shrink geometry based on vertex translation, interwoven with retriangulation, is suggested. The shrinking method requires neither tetrahedralization nor calculation of the medial axis, and it preserves the topology of the geometry, i.e., holes are neither lost nor introduced. The proposed methods are successfully applied to industrial geometries.
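As a rough sketch of the vertex-translation idea (a minimal illustration under assumed inputs, not the paper's algorithm: the interleaved retriangulation step is omitted, and the inward direction is approximated by area-weighted vertex normals):

import numpy as np

def shrink_mesh(vertices, faces, offset):
    # Shrink a closed triangle mesh by translating each vertex inward
    # along its area-weighted vertex normal.
    # vertices: (V, 3) float array; faces: (F, 3) int array, outward CCW.
    normals = np.zeros_like(vertices)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)   # normal scaled by 2x face area
    for i in range(3):
        np.add.at(normals, faces[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    normals /= np.where(lengths > 0, lengths, 1.0)   # unit vertex normals
    return vertices - offset * normals    # step opposite the outward normal

A small offset applied repeatedly, with triangle-quality checks between steps, would mimic the interleaving with retriangulation described above.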
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet and within applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by throwing
away the least “important” information. In standard JPEG, both chroma components are downsampled simultaneously, but in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the blue chrominance (Cb) is downsampled than when the red chrominance (Cr) is downsampled in JPEG compression, whereas the peak signal-to-noise ratio is higher when the red chrominance is downsampled than when the blue chrominance is downsampled. In particular, we use the image hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference under either method.
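The single-chroma comparison can be sketched with numpy (an illustration of the idea only: it assumes full-range BT.601 conversion, even image dimensions, and a simple 2x2 averaging filter in place of the full JPEG DCT/quantization pipeline):

import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range JPEG YCbCr conversion (BT.601 coefficients).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)

def downsample(ch):
    # 2x2 averaging (a crude low-pass) followed by nearest-neighbour
    # upsampling back to full resolution; assumes even dimensions.
    small = (ch[0::2, 0::2] + ch[0::2, 1::2] +
             ch[1::2, 0::2] + ch[1::2, 1::2]) / 4.0
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def psnr(original, reconstructed):
    rmse = np.sqrt(np.mean((original - reconstructed) ** 2))
    return 20.0 * np.log10(255.0 / rmse)

def degrade_one_chroma(rgb, which):
    y, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    if which == "cb":
        cb = downsample(cb)   # degrade only the blue-difference channel
    else:
        cr = downsample(cr)   # degrade only the red-difference channel
    return ycbcr_to_rgb(y, cb, cr)

Given an image array img, comparing psnr(img, degrade_one_chroma(img, "cb")) with psnr(img, degrade_one_chroma(img, "cr")) reproduces the kind of Cb-versus-Cr comparison described above.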
Abstract: A spectrophotometric method was developed for simultaneous quantification of pseudoephedrine hydrochloride (PSE) and triprolidine hydrochloride (TRI) using the second derivative method (zero-crossing technique). The second derivative amplitudes of PSE and TRI were measured at 271 and 321 nm, respectively. The calibration curves were linear in the range of 200 to 1,000 µg/ml for PSE and 10 to 50 µg/ml for TRI. The method was validated for specificity, accuracy, precision, limit of detection, and limit of quantitation. The proposed method was applied to the assay and dissolution testing of PSE and TRI in commercial tablets without any chemical separation. The results were compared with those obtained by the official USP31 method, and statistical tests showed that there is no significant difference between the methods at the 95% confidence level. The proposed method is simple, rapid, and suitable for routine quality control applications. Keywords: Triprolidine, Pseudoephedrine, Derivative spectrophotometry, Dissolution testing.
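The zero-crossing computation at the heart of the method can be sketched with numpy (an illustration only; the calibration amplitudes below are hypothetical, not the paper's measurements):

import numpy as np

def second_derivative_amplitude(wavelengths, absorbance, target_nm):
    # Zero-crossing technique: read one analyte's second-derivative
    # amplitude at a wavelength where the other analyte's second
    # derivative crosses zero, eliminating its contribution.
    d1 = np.gradient(absorbance, wavelengths)
    d2 = np.gradient(d1, wavelengths)
    return np.interp(target_nm, wavelengths, d2)

# Hypothetical PSE calibration at 271 nm (illustrative numbers only).
concs = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])   # ug/ml
amps  = np.array([0.012, 0.024, 0.035, 0.049, 0.060])
slope, intercept = np.polyfit(concs, amps, 1)             # linear fit
unknown = (0.030 - intercept) / slope                     # invert for assay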
Abstract: Explosions may cause extensive damage to buildings and sometimes lead to progressive and total destruction. Pressures induced by explosions are among the most destructive loads a structure may experience. Since designing structures for large explosions may be expensive and impractical, engineers are looking for methods of preventing the destruction that results from explosions. A favorable structural system is one that does not collapse entirely due to a local explosion, since such structures sustain less loss than those that bear the load and then fail suddenly. Designing and constructing vital installations so that they resist a direct hit by a bomb or rocket is not practical, economical, or expedient in many cases, because the cost of construction and installation to such specifications is several times the total cost of the related equipment.
Abstract: In this research, the diffusion of innovation regarding smartphone usage is analysed through consumer behaviour theory.
This research aims to determine whether a pattern surrounding the
diffusion of innovation exists. As a methodology, an empirical study
of the switch from a conventional cell phone to a smartphone was
performed. Specifically, a questionnaire survey was completed by
general consumers, and the situational and behavioural characteristics
of switching from a cell phone to a smartphone were analysed. In
conclusion, we found that the speed of the diffusion of innovation, the
consumer behaviour characteristics, and the utilities of the product
vary according to the stage of the product life cycle.
Abstract: The discrete choice model is the most widely used methodology for studying travelers' mode choice and demand. However, calibrating a discrete choice model requires extensive questionnaire survey data. In this study, an aggregate model is proposed. Historical data on passenger volumes for high speed rail and domestic civil aviation are employed to calibrate and validate the model. Different models are compared so as to propose the best one. The results show that systems of equations forecast better than a single equation does, and that models including an external variable, oil price, are better than models based on a closed-system assumption.
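The aggregate calibration can be illustrated with a simple least-squares fit (a sketch with hypothetical numbers, not the study's data or its final model form):

import numpy as np

# Hypothetical history: oil price as the external variable, plus a
# time trend, explaining the high-speed-rail share of the rail+air market.
oil_price  = np.array([60.0, 65.0, 72.0, 80.0, 75.0, 90.0])
rail_share = np.array([0.42, 0.44, 0.47, 0.52, 0.50, 0.57])
trend = np.arange(len(oil_price), dtype=float)

# Design matrix with intercept; calibrate by ordinary least squares.
X = np.column_stack([np.ones_like(trend), oil_price, trend])
beta, *_ = np.linalg.lstsq(X, rail_share, rcond=None)
forecast = X @ beta   # in-sample forecast of the rail share

A system of such equations, one per mode, with the shares constrained to sum to one, is one way to realise the systematic-equations variant mentioned above.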
Abstract: An implant elicits a biological response in the
surrounding tissue which determines the acceptance and long-term
function of the implant. Dental implants have become one of the
main therapeutic methods in the clinic after tooth loss. A successful implant is in contact with bone and soft tissue, the latter represented by fibroblasts. In our
study we focused on the interaction of six differently chemically and physically modified titanium implants (Tis-MALP, Tis-O, Tis-OA, Tis-OPAAE, Tis-OZ, Tis-OPAE) with alveolar fibroblasts as well as with five types of microorganisms (S. epidermidis, S. mutans, S. gordonii, S. intermedius, C. albicans). Microorganism adhesion was determined by CFU (colony forming unit) counts and biofilm
formation. The expression of α3β1 and vinculin on alveolar fibroblasts was demonstrated using phospho-specific cell-based ELISA (PACE). Alveolar fibroblasts showed the highest expression of these proteins on Tis-OPAAE and Tis-OPAE. This corresponds with the results for bacterial adhesion and biofilm formation and was related to the lowest production of collagen I by alveolar fibroblasts on the Tis-OPAAE titanium disc.
Abstract: This paper proposes a hybrid method for eye localization
in facial images. The novelty is in combining techniques
that utilise colour, edge and illumination cues to improve accuracy.
The method is based on the observation that eye regions have dark colour, a high density of edges, and low illumination compared to other parts of the face. The first step of the method is to extract
connected regions from facial images using colour, edge density and
illumination cues separately. Some of the regions are then removed
by applying rules that are based on the general geometry and shape
of eyes. The remaining connected regions obtained through these
three cues are then combined in a systematic way to enhance the
identification of the candidate regions for the eyes. The geometry
and shape based rules are then applied again to further remove the
false eye regions. The proposed method was tested using images from the PICS facial image database and achieves 93.7% and 87% accuracy for initial blob extraction and final eye detection, respectively.
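The cue-combination step might look like the following sketch (the thresholds, window sizes, and two-of-three voting rule are illustrative assumptions, not the paper's parameters):

import numpy as np
from scipy import ndimage

def candidate_eye_regions(gray):
    # Eye regions tend to be dark, edge-dense, and weakly illuminated.
    g = gray.astype(float)
    dark = g < np.percentile(g, 20)                 # dark-colour cue
    gy, gx = np.gradient(g)
    density = ndimage.uniform_filter(np.hypot(gx, gy), size=9)
    edgy = density > np.percentile(density, 80)     # edge-density cue
    illum = ndimage.uniform_filter(g, size=25)
    dim = illum < np.percentile(illum, 30)          # low-illumination cue
    # Keep pixels supported by at least two of the three cues, then
    # extract connected regions for the geometry/shape-based filtering.
    votes = dark.astype(int) + edgy.astype(int) + dim.astype(int)
    labels, n_regions = ndimage.label(votes >= 2)
    return labels, n_regions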
Abstract: The Aggregate Production Plan (APP) is a schedule of the organization's overall operations over a planning horizon to satisfy demand while minimizing costs. It is the baseline for any further planning and for formulating the master production schedule and the resource, capacity, and raw material plans. This paper presents a methodology to model the Aggregate Production Planning problem, which is combinatorial in nature, and to optimize it with Genetic Algorithms. This is done considering a multitude of constraints of a contradictory nature, with overall cost as the optimization criterion, made up of the costs of production, workforce, inventory, and subcontracting. A case study of substantial size, used to develop the model, is presented, along with the genetic operators.
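A minimal genetic algorithm for a toy APP instance gives the flavour of the approach (the costs, capacity, and operators below are illustrative assumptions, not the case study's data):

import numpy as np

rng = np.random.default_rng(0)
demand = np.array([120.0, 150.0, 100.0, 180.0])   # units per period
cap = 160                                          # production capacity
c_prod, c_inv, c_sub = 10.0, 2.0, 15.0             # unit costs

def cost(plan):
    # Production cost plus inventory holding, with shortages covered
    # by subcontracting at a premium.
    inventory, total = 0.0, 0.0
    for d, p in zip(demand, plan):
        inventory += p - d
        shortage = max(0.0, -inventory)
        total += c_prod * p + c_inv * max(0.0, inventory) + c_sub * shortage
        inventory = max(0.0, inventory)
    return total

def genetic_algorithm(pop_size=40, gens=200):
    pop = rng.integers(0, cap + 1, size=(pop_size, len(demand))).astype(float)
    for _ in range(gens):
        pop = pop[np.argsort([cost(ind) for ind in pop])]  # best first
        children = []
        while len(children) < pop_size // 2:
            a, b = pop[rng.integers(0, pop_size // 2, size=2)]
            cut = rng.integers(1, len(demand))     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            if rng.random() < 0.2:                 # random-reset mutation
                child[rng.integers(len(demand))] = rng.integers(0, cap + 1)
            children.append(child)
        pop[pop_size // 2:] = children             # replace the worst half
    return pop[0], cost(pop[0])

Further constraints, such as workforce limits, would enter either as penalty terms in the cost function or as repair operators, which is where the contradictory constraints mentioned above are handled.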
Abstract: This paper gives the pilot results of a project oriented toward the use of data mining techniques for knowledge discovery from production systems and the application of that knowledge in the management of these systems. Simulation models of manufacturing systems were developed to obtain the necessary data about production. The authors developed a way of storing the data obtained from the simulation models in a data warehouse. A data mining model was created using specific methods and selected techniques for defined problems of production system management. The new knowledge was applied to the production management system and tested on simulation models of the production system. An important benefit of the project is the proposal of a new methodology focused on data mining from the databases that store operational data about the production process.
Abstract: The process capability index Cpk is the most widely used index in making managerial decisions, since it provides bounds on the process yield for normally distributed processes. However, existing methods for assessing process performance that are constructed by statistical inference may unfortunately lead to unreliable results, because uncertainties exist in most real-world applications. Thus, this study adopts fuzzy inference to deal with testing of Cpk. A graded score is obtained for assessing a supplier's process instead of a severe binary evaluation.
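For reference, the crisp index and one possible graded score can be sketched as follows (the fuzzy part is a simplified stand-in, not this study's inference rules):

import numpy as np

def cpk(samples, lsl, usl):
    # Classical estimate: Cpk = min(USL - mu, mu - LSL) / (3 sigma).
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def capability_score(cpk_hat, marginal=1.0, capable=1.33):
    # Illustrative graded score: 0 below 'marginal', 1 above 'capable',
    # linear in between, instead of a crisp accept/reject decision.
    return float(np.clip((cpk_hat - marginal) / (capable - marginal), 0.0, 1.0))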
Abstract: In this paper, by using a fixed point theorem, the method of upper and lower solutions, and the monotone iterative technique, we prove the existence of maximal and minimal solutions of differential equations with delay, which improves and generalizes the results of related papers.
Abstract: A mammography image is composed of low-contrast areas in which breast tissue and breast abnormalities such as microcalcifications can hardly be differentiated by the medical practitioner. This paper presents the application of active contour models (snakes) for the segmentation of microcalcifications in mammography images. The microcalcification areas segmented by the Balloon Snake, Gradient Vector Flow (GVF) Snake, and Distance Snake are compared against the true value of the microcalcification area, where the true value is the average microcalcification area in the original mammography image traced by expert radiologists. From fifty images tested, the results show that the accuracies of the Balloon Snake, GVF Snake, and Distance Snake in segmenting microcalcification boundaries are 96.01%, 95.74%, and 95.70%, respectively. This implies that the Balloon Snake is the better segmentation method for locating the exact boundary of a microcalcification region.
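For a sense of how a snake converges to a blob boundary, here is a sketch using scikit-image's generic active contour (a stand-in for illustration; it is not the Balloon, GVF, or Distance formulation evaluated in the paper, and the parameters are assumptions):

import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def segment_blob(image, center, radius, n_points=200):
    # Initialise a circular snake around a suspected microcalcification
    # and let it converge to the nearby intensity boundary.
    s = np.linspace(0.0, 2.0 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(s),
                            center[1] + radius * np.cos(s)])  # (row, col)
    smoothed = gaussian(image, sigma=2, preserve_range=True)
    return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)

def polygon_area(contour):
    # Shoelace formula; the segmented area can then be compared against
    # the expert-traced reference area, as in the accuracy figures above.
    r, c = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(r, np.roll(c, 1)) - np.dot(c, np.roll(r, 1)))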
Abstract: Recent advances in wireless sensor networks have led
to many routing methods designed for energy-efficiency in wireless
sensor networks. Although many routing methods have been proposed for ubiquitous sensor networks (USNs), no single routing method can remain energy-efficient when the environment of the network varies. A network security model should focus on controlling network access to the various hosts and the services they offer, rather than on securing them one by one. When ubiquitous sensor networks are deployed in hostile
environments, an adversary may compromise some sensor nodes and
use them to inject false sensing reports. False reports can lead not only to false alarms but also to the depletion of the limited energy resources of battery-powered networks. The interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication.
This paper presents an LMDD (low energy method for data delivery) algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm switches protocols based on the output of fuzzy logic, which gives the fitness level of each protocol for the current environment.
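The fuzzy protocol-switching idea can be sketched as follows (the membership shapes, inputs, rule base, and protocol names are illustrative assumptions, not LMDD's actual rule base):

def triangular(x, a, b, c):
    # Triangular membership function on [a, c] peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def protocol_fitness(residual_energy, traffic):
    # Inputs normalised to [0, 1]; min acts as the fuzzy AND.
    low_e  = triangular(residual_energy, -0.01, 0.0, 0.6)
    high_e = triangular(residual_energy, 0.4, 1.0, 1.01)
    light  = triangular(traffic, -0.01, 0.0, 0.6)
    heavy  = triangular(traffic, 0.4, 1.0, 1.01)
    # Rules: a multi-hop protocol fits energy-rich, busy networks;
    # a low-duty-cycle protocol fits depleted, quiet ones.
    return {"multi_hop": min(high_e, heavy),
            "low_duty_cycle": min(low_e, light)}

def choose_protocol(residual_energy, traffic):
    scores = protocol_fitness(residual_energy, traffic)
    return max(scores, key=scores.get)   # install the fittest protocol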
Abstract: In this paper, the effect of the width and height of the model on the earthquake response computed by the finite element method is discussed. For this purpose, an earth dam, as a soil structure under earthquake loading, is considered. Various dam-foundation models are
analyzed by Plaxis, a finite element package for solving geotechnical
problems. The results indicate considerable differences in the seismic
responses.
Abstract: Sociological models (e.g., social network analysis, small-group dynamics, and gang models) have historically been used to predict the behavior of terrorist groups. However, they may not be the most appropriate method for understanding the behavior of terrorist organizations, because these models were not initially intended to incorporate the violent behavior of their subjects. Rather, models that incorporate life-and-death competition between subjects, i.e., the models used by scientists to examine the behavior of wildlife populations, may provide a more accurate analysis. This paper suggests the use of biological models to attain a more robust method for understanding the behavior of terrorist organizations than traditional methods provide. The study also describes how a biological population model incorporating predator-prey behavior factors can predict terrorist organizational recruitment behavior, for the purpose of understanding the factors that govern the growth and decline of terrorist organizations. The Lotka-Volterra model, a biological model based on a predator-prey relationship, is applied to a highly suggestive case study, that of the Irish Republican Army. This case study illuminates how a biological model can be utilized to understand the actions of a terrorist organization.
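The underlying model is the classical pair of coupled ODEs, which can be integrated directly (the parameter values here are hypothetical; the paper calibrates against the IRA case instead):

import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, alpha, beta, delta, gamma):
    # dx/dt = alpha*x - beta*x*y ;  dy/dt = delta*x*y - gamma*y
    # One interpretive mapping: x is the recruitable population and
    # y the organization drawing on it (an assumption for illustration).
    x, y = z
    return [alpha * x - beta * x * y, delta * x * y - gamma * y]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 2.0],
                args=(1.0, 0.2, 0.1, 0.5), dense_output=True)
t = np.linspace(0.0, 50.0, 500)
x_t, y_t = sol.sol(t)   # oscillating growth and decline of both groups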
Abstract: The addition of milli- or micro-sized particles to the heat transfer fluid is one of many techniques employed for improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging, and erosion
of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. It is therefore imperative that the Reynolds number (Re) and the volume fraction be optimal for good thermal-hydraulic effectiveness. In this work, the heat transfer enhancement of aluminium oxide nanofluids at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamic modeling of the nanofluid flow, adopting the single-phase approach. The nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e., 0.02%, 0.1%, and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e., 1%, 4%, and 6%) are in reasonable agreement with both experimental and numerical results available in the literature. Thus, the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
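For orientation, a single-phase estimate of this kind combines an effective-property model with a standard turbulent correlation (a sketch only; the paper's CFD model and property correlations may differ):

def maxwell_conductivity(k_f, k_p, phi):
    # Maxwell model for the effective thermal conductivity of a dilute
    # suspension of particles (volume fraction phi) in a base fluid.
    return k_f * (k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)) / \
                 (k_p + 2.0 * k_f - phi * (k_p - k_f))

def dittus_boelter(re, pr):
    # Classic turbulent-heating correlation: Nu = 0.023 Re^0.8 Pr^0.4.
    return 0.023 * re ** 0.8 * pr ** 0.4

# Illustrative values: water base fluid with 1% Al2O3 nanoparticles.
k_eff = maxwell_conductivity(k_f=0.613, k_p=40.0, phi=0.01)   # W/(m K)
nu = dittus_boelter(re=10_000, pr=6.0)
h = nu * k_eff / 0.01   # heat transfer coefficient in a 10 mm pipe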
Abstract: The use of renewable energy sources is becoming more crucial, and the wider application of renewable energy devices at the domestic, commercial, and industrial levels reflects not only stronger awareness but also significantly increased installed capacity. Biomass, principally in the form of wood, has long been converted into energy for human use. Gasification is a process of conversion of solid carbonaceous fuel into combustible gas by partial combustion. Gasifier models operate under various conditions because the parameters maintained in each model differ. This study used experimental data comprising three input variables, namely biomass consumption, temperature at the combustion zone, and ash discharge rate, with gas flow rate as the single output variable. In this paper, response surface methods were applied to identify the gasifier system equation that best suits the experimental data. The results showed that the linear model gave the best fit.
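A first-order response-surface fit of that form is a few lines of numpy (the numbers below are synthetic placeholders, not the study's measurements):

import numpy as np

biomass = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])   # consumption
temp    = np.array([750., 800., 820., 850., 880., 900.])   # combustion zone
ash     = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])         # discharge rate
gasflow = np.array([22.0, 26.5, 30.1, 34.8, 38.9, 43.2])   # output

# Linear model: gasflow = b0 + b1*biomass + b2*temp + b3*ash, fitted
# by ordinary least squares.
X = np.column_stack([np.ones_like(biomass), biomass, temp, ash])
coeffs, *_ = np.linalg.lstsq(X, gasflow, rcond=None)
pred = X @ coeffs
r2 = 1.0 - np.sum((gasflow - pred) ** 2) / np.sum((gasflow - gasflow.mean()) ** 2)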
Abstract: CScheme, a concurrent programming paradigm based on the scheme concept, enables concurrency schemes to be constructed from smaller synchronization units through a GUI-based composer and later reused on other concurrency problems of a similar nature. This paradigm is particularly important in the multi-core environment prevalent nowadays. In this paper, we demonstrate techniques to separate concurrency from functional code using the CScheme paradigm. We then illustrate how the CScheme methodology can be used to solve some traditional concurrency problems, namely the critical section problem and the readers-writers problem, using synchronization schemes such as the Single Threaded Execution Scheme and the Readers Writers Scheme.
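A behavioural sketch of a Readers Writers synchronization unit (written with plain Python threading rather than CScheme's GUI-composed schemes, so this only mirrors the scheme's behaviour):

import threading

class ReadersWriterLock:
    # Writers get exclusive access; any number of readers may share it.
    # (Readers may starve writers; acceptable for a sketch.)
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # guards the reader count
        self._room = threading.Lock()    # held by a writer or first reader

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._room.acquire()     # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._room.release()     # last reader admits writers

    def acquire_write(self):
        self._room.acquire()             # exclusive, single-threaded entry

    def release_write(self):
        self._room.release()

Wrapping a critical section with acquire_write/release_write corresponds to the Single Threaded Execution Scheme, while the read side corresponds to the Readers Writers Scheme.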