Abstract: This study was a case study analysis of the Thai Asia
Pacific Brewery Company. The purpose was to analyze the
company's marketing objective, its marketing strategy at the company
level, and its marketing mix before liquor liberalization in 2000. The
study used a qualitative, descriptive research approach, and its results
were as follows: (1) the marketing objective was to increase the
market share of Heineken and Amstel; (2) the company's marketing
strategies were a brand-building strategy and a distribution strategy.
The company also implemented the following marketing mix
strategies. Product strategy: the company added more beer brands,
namely Amstel and Tiger, to provide additional choice to consumers,
and conducted product and marketing research and product
development. Price strategy: the company took cost, competitors, the
market, the economic situation, and taxes into consideration.
Promotion strategy: the company conducted sales promotion and
advertising. Distribution strategy: the company extended its channels
of distribution into food shops, pubs, and various entertainment
venues. This study benefits interested persons and people engaged in
the beer business.
Abstract: This paper presents a method to support dynamic
packing in cases when no collision-free path can be found. The
method, which is primarily based on path planning and shrinking of
geometries, suggests a minimal geometry design change that results
in a collision-free assembly path. A supplementary approach to
optimize the geometry design change with respect to redesign cost is
described. Supporting this dynamic packing method, a new method
to shrink geometry, based on vertex translation interwoven with
retriangulation, is suggested. The shrinking method requires neither
tetrahedralization nor calculation of the medial axis, and it preserves
the topology of the geometry, i.e. holes are neither lost nor introduced.
The proposed methods are successfully applied to industrial
geometries.
Abstract: The research presented in this paper is part of an on-going
project applying neural network and fuzzy models to evaluate the
sociological factors that affect the educational performance of
students in Sri Lanka. One of its major goals is to prepare the ground
for devising a counseling tool that helps these students perform better
in their examinations, especially the G.C.E. O/L (General Certificate
of Education - Ordinary Level) examination. Closely related
sociological factors are collected as raw data, the noise in these data
is filtered through the fuzzy interface, and a supervised neural
network is utilized to recognize performance patterns against the
chosen social factors.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large relative to the bandwidth commonly
available to transmit them over the Internet and between
applications. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. The
performance of any image compression method can be evaluated by
measuring the root-mean-square error (RMSE) and the peak
signal-to-noise ratio (PSNR). The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by throwing
away the least “important” information. In JPEG, both chroma
components are normally downsampled simultaneously; in this paper
we compare the results when compression is performed by
downsampling only a single chroma component. We demonstrate that
a higher compression ratio is achieved when the blue chrominance
(Cb) is downsampled than when the red chrominance (Cr) is
downsampled, whereas the peak signal-to-noise ratio is higher when
Cr is downsampled than when Cb is downsampled. In particular, we
use hats.jpg to demonstrate JPEG compression using a low-pass
filter, and show that with either method the image is compressed with
barely any visible difference.
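The RMSE and PSNR measures used above can be computed directly from pixel values. A minimal Python sketch; the two 2 × 2 grayscale images are hypothetical examples, not data from the paper:

```python
import math

def rmse(original, compressed):
    """Root-mean-square error between two equal-sized images (flat pixel lists)."""
    n = len(original)
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(original, compressed)) / n)

def psnr(original, compressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher means less degradation."""
    e = rmse(original, compressed)
    if e == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_value / e)

# Hypothetical 2x2 grayscale images, flattened row by row
orig = [52, 55, 61, 66]
comp = [50, 57, 60, 65]
print(round(rmse(orig, comp), 3))  # → 1.581
print(round(psnr(orig, comp), 2))  # → 44.15
```

A lower RMSE (equivalently, a higher PSNR) indicates that the decompressed image is closer to the original.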
Abstract: Batch adsorption of recalcitrant melanoidin onto the abundantly available coal fly ash was carried out. The fly ash had a low specific surface area (SBET) of 1.7287 m²/g and a pore volume of 0.002245 cm³/g, while the predominant phases in it were evaluated qualitatively by XRD analysis. Colour removal efficiency was found to depend on the various factors studied. Maximum colour removal was achieved around pH 6, and increasing the sorbent mass from 10 g/L to 200 g/L enhanced colour reduction from 25% to 86% at 298 K. The negative Gibbs free energy indicated the spontaneity of the process, while positive enthalpy change values showed its endothermic nature. Non-linear optimization of error functions showed that the Freundlich and Redlich-Peterson isotherms described the sorption equilibrium data best. The coal fly ash had a maximum sorption capacity of 53 mg/g and could thus be used as a low-cost adsorbent for melanoidin removal.
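For illustration, the Freundlich isotherm qe = KF·Ce^(1/n) named above can be fitted in its linearized log-log form. A minimal Python sketch (the paper itself used non-linear optimization of error functions, and the data points below are hypothetical placeholders, not the study's measurements):

```python
import math

def fit_freundlich(ce, qe):
    """Fit q_e = K_F * C_e**(1/n) by least squares on the linearized form
    ln(q_e) = ln(K_F) + (1/n) * ln(C_e). Returns (K_F, n)."""
    x = [math.log(c) for c in ce]
    y = [math.log(q) for q in qe]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return math.exp(intercept), 1.0 / slope

# Hypothetical equilibrium data: C_e (mg/L) and q_e (mg/g)
ce = [5.0, 10.0, 20.0, 40.0, 80.0]
qe = [8.1, 12.0, 17.5, 26.0, 38.5]
kf, n_f = fit_freundlich(ce, qe)
```

A favourable fit (n > 1) is consistent with the isotherm results reported above.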
Abstract: This paper presents a Genetic Algorithm (GA) based
approach for the allocation of FACTS (Flexible AC Transmission
System) devices to improve the power transfer capacity of an
interconnected power system. The GA-based approach is applied to
the IEEE 30-bus system. The system is reactively loaded from the
base to 200% of base load. FACTS devices are installed in the
different locations of the power system and system performance is
observed with and without the FACTS devices. First, the locations
where the FACTS devices are to be placed are determined by
calculating the active and reactive power flows in the lines. The
Genetic Algorithm is then applied to find the magnitudes of the
FACTS devices. The results obtained clearly show that this
GA-based placement of FACTS devices is tremendously beneficial in
terms of both performance and economy.
Abstract: An implant elicits a biological response in the
surrounding tissue which determines the acceptance and long-term
function of the implant. Dental implants have become one of the
main clinical therapy methods after tooth loss. A successful implant
is in contact with bone and with soft tissue, represented by
fibroblasts. In our
study we focused on the interaction between six different chemically
and physically modified titanium implants (Tis-MALP, Tis-O, Tis-
OA, Tis-OPAAE, Tis-OZ, Tis-OPAE) with alveolar fibroblasts as
well as with five types of microorganisms (S. epidermidis, S. mutans,
S. gordonii, S. intermedius, C. albicans). Microorganism adhesion
was determined by CFU (colony forming unit) counts and biofilm
formation. The presence of α3β1 and vinculin expression on alveolar
fibroblasts was demonstrated using phospho specific cell based
ELISA (PACE). Alveolar fibroblasts showed the highest expression
of these proteins on Tis-OPAAE and Tis-OPAE. This corresponds
with the results for bacterial adhesion and biofilm formation, and it
was related to the lowest production of collagen I by alveolar
fibroblasts on the Tis-OPAAE titanium disc.
Abstract: The Aggregate Production Plan (APP) is a schedule of
the organization's overall operations over a planning horizon,
intended to satisfy demand while minimizing costs. It is the baseline
for any further planning and for formulating the master production
schedule and the resource, capacity, and raw material plans. This
paper presents a
methodology for modeling the Aggregate Production Planning
problem, which is combinatorial in nature, and optimizing it with
Genetic Algorithms. This is done considering a multitude of
constraints of a contradictory nature and an optimization criterion of
overall cost, made up of production, workforce, inventory, and
subcontracting costs. A case study of substantial size, used to develop
the model, is presented, along with the genetic operators.
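A minimal sketch of a GA of the kind described, minimizing production, subcontracting, and inventory costs over a short horizon (the demand figures, unit costs, and operator settings are illustrative assumptions, not the case study's data; workforce costs are omitted for brevity):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

DEMAND = [120, 150, 130, 160]              # units per period (hypothetical)
CAPACITY = 140                             # in-house production cap per period
PROD_COST, SUB_COST, HOLD_COST = 5, 9, 2   # per-unit costs (hypothetical)

def cost(plan):
    """Total cost of a production plan; demand unmet in-house is subcontracted."""
    total, inventory = 0, 0
    for produce, demand in zip(plan, DEMAND):
        inventory += produce
        shortfall = max(0, demand - inventory)
        total += produce * PROD_COST + shortfall * SUB_COST
        inventory = max(0, inventory - demand)
        total += inventory * HOLD_COST
    return total

def mutate(plan):
    """Perturb one period's production, clamped to the feasible range."""
    child = list(plan)
    i = random.randrange(len(child))
    child[i] = min(CAPACITY, max(0, child[i] + random.randint(-20, 20)))
    return child

def crossover(a, b):
    """One-point crossover of two plans."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_algorithm(pop_size=30, generations=200):
    pop = [[random.randint(0, CAPACITY) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]   # elitist selection keeps the best half
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_algorithm()
```

With elitist selection the best plan never worsens between generations, so the result is at least as cheap as the best random starting plan.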
Abstract: This article considers the influence of selected economic indicators on the development of the Zlin region. The development of the region is influenced mainly by the business entities located in the region, as well as by investors who contribute to the development of regions. For the region to develop, it is necessary that skilled workers remain in the region rather than leaving it. These and other factors affect the development of each region.
Abstract: The paper investigates the potential of support vector
machines and Gaussian process based regression approaches to
model the oxygen–transfer capacity from experimental data of
multiple plunging jets oxygenation systems. The results suggest the
utility of both the modeling techniques in the prediction of the
overall volumetric oxygen transfer coefficient (KLa) from operational
parameters of the multiple plunging jets oxygenation system.
Correlation coefficient, root-mean-square error, and coefficient of
determination values of 0.971, 0.002, and 0.945, respectively, were
achieved by the support vector machine, in comparison to values of
0.960, 0.002, and 0.920, respectively, achieved by Gaussian process
regression. Further, the performances of both regression approaches
in predicting the overall volumetric oxygen transfer coefficient were
compared with the empirical relationship for multiple
plunging jets. A comparison of the results suggests that the support
vector machines approach works well in comparison to both the
empirical relationship and the Gaussian process approach, and could
successfully be employed in modeling oxygen transfer.
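The three goodness-of-fit measures reported above can be computed from observed and model-predicted KLa values as follows. A minimal Python sketch (the sample values are hypothetical, not the experimental data):

```python
import math

def evaluate(observed, predicted):
    """Return (correlation coefficient r, RMSE, coefficient of determination R^2)."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    r = cov / (so * sp)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    rmse = math.sqrt(ss_res / n)
    r2 = 1 - ss_res / (so ** 2)  # so**2 is the total sum of squares
    return r, rmse, r2

# Hypothetical K_L a values (1/s): measured vs. model-predicted
obs = [0.010, 0.014, 0.018, 0.022, 0.030]
pred = [0.011, 0.013, 0.019, 0.021, 0.029]
r, rmse, r2 = evaluate(obs, pred)
```

Note that r measures linear association while R² penalizes any systematic bias, which is why both are usually reported together.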
Abstract: In this paper, by using a fixed point theorem, the upper and lower solutions method, and the monotone iterative technique, we prove the existence of maximum and minimum solutions of differential equations with delay, which improves and generalizes the results of related papers.
Abstract: In this paper, the effect of width and height of the
model on the earthquake response in the finite element method is
discussed. For this purpose, an earth dam, as a soil structure under
earthquake loading, has been considered. Various dam-foundation
models are
analyzed by Plaxis, a finite element package for solving geotechnical
problems. The results indicate considerable differences in the seismic
responses.
Abstract: In comparison to the original SVM, which involves a
quadratic programming task, LS-SVM simplifies the required
computation, but unfortunately the sparseness of the standard SVM is
lost. Another problem is that LS-SVM is only optimal if the training
samples are corrupted by Gaussian noise. In Least Squares SVM
(LS-SVM), the nonlinear solution is obtained by first mapping the
input vector to a high-dimensional kernel space in a nonlinear
fashion, where the solution is calculated from a linear equation set. In
this paper a geometric view of the kernel space is introduced, which
enables us to develop a new formulation to achieve a sparse and
robust estimate.
Abstract: The use of renewable energy sources is becoming more
crucial, and the wider application of renewable energy devices at the
domestic, commercial, and industrial levels is reflected not only in
stronger awareness but also in significantly increased installed
capacity. Biomass, principally in the form of wood, has long been
converted into energy for human use. Gasification is a process that
converts solid carbonaceous fuel into combustible gas by partial
combustion. Gasifier models have various operating conditions
because the parameters kept in each model differ. This study used
experimental data comprising three input variables (biomass
consumption, temperature at the combustion zone, and ash discharge
rate) and gas flow rate as the single output variable. Response surface
methods were applied to identify a gasifier system equation suitable
for the experimental data. The results showed that a linear model
gave the best results.
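A first-order (linear) response-surface model like the one identified above can be fitted by ordinary least squares. A minimal self-contained Python sketch via the normal equations (the run data below are hypothetical placeholders, not the study's measurements):

```python
def fit_linear_model(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... + bk*xk via the normal equations."""
    rows = [[1.0] + list(x) for x in X]  # prepend an intercept column
    k = len(rows[0])
    # Build A = X^T X and b = X^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Solve A beta = b by Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Hypothetical runs: [biomass consumption, combustion-zone temp, ash discharge rate]
X = [[10.0, 750.0, 0.5], [12.0, 800.0, 0.7], [14.0, 820.0, 0.6],
     [16.0, 860.0, 0.9], [18.0, 900.0, 0.8]]
y = [20.0, 24.0, 27.0, 31.0, 35.0]       # gas flow rate (output)
b0, b1, b2, b3 = fit_linear_model(X, y)
```

Higher-order response-surface models simply add squared and interaction columns to X before the same fit.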
Abstract: This paper proposes a new solution to the string matching problem. The solution constructs an inverted list representing the string pattern to be searched for, and then uses a new algorithm to process the input string in a single pass. The preprocessing phase takes (1) time complexity O(m) and (2) space complexity O(1), where m is the length of the pattern. The searching phase takes (1) O(m+α) time in the average case, (2) O(n/m) in the best case, and (3) O(n) in the worst case, where α is the number of comparisons leading to mismatches and n is the length of the input text.
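As a generic illustration of the inverted-list idea (this is a simple voting sketch, not the paper's algorithm, and it does not attain the complexities stated above): each text character votes, in one left-to-right pass, for every pattern alignment it is consistent with.

```python
from collections import defaultdict

def build_inverted_list(pattern):
    """Map each character to the list of positions where it occurs in the pattern."""
    inv = defaultdict(list)
    for pos, ch in enumerate(pattern):
        inv[ch].append(pos)
    return inv

def search(text, pattern):
    """Single pass over text: character text[i] matching pattern position pos
    casts one vote for alignment start i - pos. A start with len(pattern)
    votes is a full match."""
    inv = build_inverted_list(pattern)
    votes = defaultdict(int)
    for i, ch in enumerate(text):
        for pos in inv[ch]:
            if i >= pos:
                votes[i - pos] += 1
    m = len(pattern)
    return sorted(s for s, v in votes.items() if v == m and s + m <= len(text))
```

For example, `search("abracadabra", "abra")` finds the occurrences at positions 0 and 7.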
Abstract: In this paper, we present C@sa, a multiagent system aimed at modeling, controlling, and simulating the behavior of an intelligent house. The developed system aims to provide architects, designers, and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily lives. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents, that interact to support the user's goals and tasks.
Abstract: Sparse representation which can represent high dimensional
data effectively has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of the data samples. To overcome this limitation,
we develop a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL), in
this paper. The proposed DR-SSL algorithm can not only make use of
the sparse representation to model the data, but can also effectively
employ the label information to guide the dimensionality reduction
procedure. In addition, the presented algorithm can effectively deal
with the out-of-sample problem. Experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo vision based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. For matching, we use the Normalized Cross Correlation (NCC) function with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from the seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one that appear in the disparity map of phase two enter the third phase, depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
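The Normalized Cross Correlation score used in the matching phase can be illustrated on a pair of flattened windows. A minimal pure-Python sketch (a real implementation evaluates this over every candidate disparity for each 5 × 5 window):

```python
import math

def ncc(window_a, window_b):
    """Normalized cross-correlation of two equal-sized image windows (flat lists).
    Returns a value in [-1, 1]; 1 means the windows match up to gain and offset."""
    n = len(window_a)
    ma = sum(window_a) / n
    mb = sum(window_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(window_a, window_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in window_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in window_b))
    if da == 0 or db == 0:
        return 0.0  # flat window: correlation is undefined, treat as no match
    return num / (da * db)
```

Because the mean is subtracted and the result is normalized, NCC is invariant to linear brightness changes between the left and right images, which is why it is a common matching score in stereo vision.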
Abstract: In this paper, an accurate theoretical analysis of the achievable average channel capacity (in the Shannon sense) per user of a hybrid cellular direct-sequence/fast frequency hopping code-division multiple-access (DS/FFH-CDMA) system operating in a Rayleigh fading environment is presented. The analysis covers the downlink operation and leads to the derivation of an exact mathematical expression between the normalized average channel capacity available to each system user, under simultaneous optimal power and rate adaptation, and the system's parameters, such as the number of hops per bit, the processing gain applied, the number of users per cell, and the received signal-to-noise power ratio over the signal bandwidth. Finally, numerical results are presented to illustrate the proposed mathematical analysis.
Abstract: A key requirement for e-learning materials is
reusability and interoperability, that is, the possibility of using at
least part of the contents in different courses and of delivering them
through different platforms. These features make it possible to limit
the cost of new packages, but they require the development of
material according to proper specifications. SCORM (Sharable
Content Object Reference Model) is a set of guidelines suitable for
this purpose. A specific adaptation project has been started to make it
possible to reuse existing materials. The paper describes the main
characteristics of the SCORM specification, and the procedure used
to modify the existing material.