Abstract: This work tests the application of computational fluid dynamics (CFD) modeling to fixed-bed catalytic cracking reactors. CFD studies of fixed-bed designs commonly use a regular packing with N = 2 to define the bed geometry. CFD provides a more accurate view of the fluid flow and heat transfer mechanisms present in fixed-bed equipment. Naphtha was used as the feedstock and the reactor length was 80 cm; the reactor is divided into three sections, with the catalyst bed packed in the middle section. The reaction scheme involved one primary reaction and 24 secondary reactions. Because of the high CPU times of these simulations, parallel processing was used. In this study, the coke formation process in a fixed-bed reactor and in an empty-tube reactor was simulated, and coke formation in the two reactors is compared. In addition, the effect of steam ratio and feed flow rate on coke formation was investigated.
Abstract: A three-dimensional simulation of harmonic up-generation in a free-electron laser amplifier operating with a cold, relativistic electron beam is presented in the steady-state regime, where the slippage of the electromagnetic wave with respect to the electron beam is ignored. By using the slowly varying envelope approximation and applying the source-dependent expansion to the wave equations, the electromagnetic fields are represented in terms of Hermite-Gaussian modes, which are well suited to the planar wiggler configuration. The electron dynamics is described by the fully three-dimensional Lorentz force equation in the presence of the realistic planar magnetostatic wiggler and the electromagnetic fields. A set of coupled nonlinear first-order differential equations is derived and solved numerically. The fundamental and third-harmonic radiation of the beam is considered. In addition to a uniform beam, a prebunched electron beam has also been studied: the effect of a sinusoidal distribution of entry times for the electrons on the evolution of the radiation is compared with that of a uniform distribution. It is shown that prebunching substantially reduces the saturation length. For efficiency enhancement, the wiggler amplitude is set to decrease linearly once the third-harmonic radiation saturates. The optimum starting point of tapering and the slope of the decrease in wiggler amplitude are found by successive runs of the code.
Abstract: Many image watermarking methods exploiting the properties of the human visual system (HVS) have been proposed in the literature. The visual threshold component is usually related either to the spatial contrast sensitivity function (CSF) or to visual masking. Regarding contrast masking in particular, most methods do not account for effects near edge regions, even though the HVS is sensitive to what happens in edge areas. This paper proposes ultrasound image watermarking using a visual threshold corresponding to the HVS, in which the coefficients in a DCT block are classified according to texture, edge, and plain areas. This classification is not only useful for imperceptibility when the watermark is inserted into an image but also achieves robust watermark detection. A comparison of the proposed method with other methods shows that it is robust to blockwise memoryless manipulations and also against noise addition.
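As an illustration of the kind of block classification described above, the sketch below labels an 8x8 DCT coefficient block as plain, edge, or texture from its AC energy distribution. The thresholds and the low-frequency rule are illustrative assumptions, not the paper's exact criteria.

```python
def classify_dct_block(coeffs):
    """Illustrative rule (not the paper's exact one): low AC energy ->
    'plain'; AC energy concentrated in low frequencies -> 'edge'
    (a strong oriented structure); otherwise -> 'texture'.
    Thresholds (100.0, 0.9) are assumptions."""
    # all AC coefficients (everything except the DC term at (0, 0))
    ac = [(i, j, coeffs[i][j]) for i in range(8) for j in range(8)
          if (i, j) != (0, 0)]
    total = sum(c * c for _, _, c in ac)
    if total < 100.0:
        return "plain"
    # energy in the lowest-frequency AC coefficients
    low = sum(c * c for i, j, c in ac if i + j <= 3)
    return "edge" if low / total > 0.9 else "texture"
```

In a watermarking pipeline of this kind, the class would then select a per-block visual threshold: plain blocks tolerate the least distortion, textured blocks the most.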
Abstract: This study proposes a new set of evaluation criteria for measuring the efficiency of real-world e-commerce websites. The criteria cover website design, usability, and performance, and the Data Envelopment Analysis (DEA) technique is used to measure website efficiency. An efficient website is defined as a site that generates the most outputs using the smallest amount of inputs. Inputs are measurements representing the effort required to build, maintain, and operate the site; outputs represent the amount of traffic the site generates, measured as the average number of daily hits and the average number of daily unique visitors.
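The DEA efficiency notion above can be sketched as follows. This is a simplified stand-in, assuming one aggregated input per site and the two outputs named in the abstract (daily hits, daily unique visitors), and replacing the usual linear-programming formulation with a grid search over output-weight directions.

```python
import math

def dea_efficiency(inputs, outputs, steps=2000):
    """Approximate CCR-style DEA efficiency for units with a single
    aggregated input and two outputs. For each unit, the efficiency is
    the best ratio of its weighted outputs to the frontier's, searched
    over non-negative output-weight directions."""
    # per-unit outputs normalised by the unit's input
    z = [(o1 / x, o2 / x) for x, (o1, o2) in zip(inputs, outputs)]
    effs = []
    for zk in z:
        best = 0.0
        for i in range(steps + 1):
            theta = (math.pi / 2) * i / steps
            u = (math.cos(theta), math.sin(theta))  # non-negative weights
            denom = max(u[0] * a + u[1] * b for a, b in z)
            best = max(best, (u[0] * zk[0] + u[1] * zk[1]) / denom)
        effs.append(best)
    return effs
```

A site scores 1.0 when, for some weighting of hits versus unique visitors, no other site produces more weighted output per unit of effort.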
Abstract: This paper gives an introduction to Web mining, then describes Web structure mining in detail and explores the data structure used by the Web. It also surveys different PageRank algorithms and compares them as used for information retrieval. The basics of Web mining and the Web mining categories are explained. Different PageRank-based algorithms such as PageRank (PR), Weighted PageRank (WPR), HITS (Hyperlink-Induced Topic Search), DistanceRank, and DirichletRank are discussed and compared. PageRanks are calculated with the PageRank and Weighted PageRank algorithms for a given hyperlink structure. A simulation program is developed for the PageRank algorithm, because PageRank is the ranking algorithm implemented in the Google search engine. The outputs are shown in table and chart form.
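The PageRank computation referred to above can be sketched as follows, using the summation form of the original formula with damping factor d = 0.85 and a hypothetical three-page hyperlink structure:

```python
def pagerank(links, d=0.85, iters=100):
    """Iterative PageRank in its summation form:
    PR(p) = (1 - d) + d * sum(PR(q) / outdeg(q))
    over pages q that link to p. `links` maps each page to the list
    of pages it links out to."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            incoming = [q for q in pages if p in links[q]]
            new[p] = (1 - d) + d * sum(pr[q] / len(links[q]) for q in incoming)
        pr = new
    return pr
```

For a hypothetical structure where A links to B and C, B links to C, and C links back to A, the iteration converges to C ranked highest (it receives links from both A and B), then A, then B.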
Abstract: Baseball holds a unique status among sports in Taiwan, having become a "symbol of the Taiwanese spirit and Taiwan's national sport". Taiwan's first professional sports league, the Chinese Professional Baseball League (CPBL), was established in 1989. Starters pitch many more innings over the course of a season, and for a century teams have made all their best pitchers starters. In this study, we attempt to determine the on-field performance of these pitchers and which of them won the most CPBL games in 2009. We apply the discriminant analysis approach, examining winning pitchers and their statistics, to reliably identify the best starting pitcher. The data employed in this paper include innings pitched (IP), earned run average (ERA), and walks plus hits per inning pitched (WHIP), provided by the official website of the CPBL. The results show that Aaron Rakers was the best starting pitcher in the CPBL. The top 10 CPBL starting pitchers won from 8 to 14 games in the 2009 season. Through Fisher discriminant analysis, the top 10 CPBL starting pitchers were predicted to win from 9 to 20 games, 1 to 7 games more than their actual totals in the 2009 season.
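A minimal two-class Fisher discriminant of the kind used above can be sketched as follows, here with two features per pitcher. The (ERA, WHIP) data and the midpoint threshold rule are illustrative assumptions, not the paper's actual dataset or decision rule.

```python
def fisher_lda(X0, X1):
    """Two-class Fisher discriminant for 2-D feature vectors:
    w = Sw^{-1} (m1 - m0), with Sw the pooled within-class scatter.
    A sample x is assigned to class 1 when w . x exceeds t, the
    midpoint of the projected class means."""
    def mean(X):
        n = len(X)
        return [sum(x[0] for x in X) / n, sum(x[1] for x in X) / n]

    def scatter(X, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in X:
            d = (x[0] - m[0], x[1] - m[1])
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(X0), mean(X1)
    s0, s1 = scatter(X0, m0), scatter(X1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    # invert the 2x2 pooled scatter matrix directly
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = (m1[0] - m0[0], m1[1] - m0[1])
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    t = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, t
```

Trained on hypothetical losing-pitcher statistics (class 0) and winning-pitcher statistics (class 1), the projection w separates the two groups along one axis, and new pitchers are classified by which side of t they fall on.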
Abstract: The effects of wood vinegar, entomopathogenic nematodes (Steinernema thailandensis n. sp.), and fermented organic substances from four plants (Derris elliptica Roxb., Stemona tuberosa Lour., Tinospora crispa Miers, and Azadirachta indica A. Juss.) were tested on five varieties of sweetpotato with potential for bioethanol production, i.e. Taiwan, China, PROC No.65-16, Phichit 166-5, and Phichit 129-6. The experimental plots were located at the Faculty of Agriculture, Natural Resources and Environment, Naresuan University, Phitsanulok, Thailand. The aim of this study was to compare the efficiency of the five treatments with respect to growth, yield, and insect infestation across the five sweetpotato varieties. Treatment with entomopathogenic nematodes gave the highest average weight of sweetpotato tubers (1.3 kg/tuber), followed by wood vinegar, fermented organic substances, and the mixed treatment, with yields of 0.88, 0.46, and 0.43 kg/tuber, respectively. The entomopathogenic nematode treatment also gave significantly higher average tuber width and length (9.82 cm and 9.45 cm, respectively), and it provided the best control of insect infestation on sweetpotato leaves and tubers. Comparing the sweetpotato varieties, PROC No.65-16 showed the greatest width and length; however, Phichit 129-6 gave a significantly higher weight of 0.94 kg/tuber. Lastly, the lowest sweetpotato weevil infestation on leaves and tubers occurred on Taiwan and Phichit 129-6.
Abstract: To further advance research on immune-related genes from T. molitor, we constructed a cDNA library and analyzed expressed sequence tag (EST) sequences from 1,056 clones. After removing vector sequences and quality checking through the Phred program (trim_alt 0.05, P-score > 20), 1,039 sequences were generated. The average insert length was 792 bp. In addition, we identified 162 clusters, 167 contigs, and 391 singletons after the clustering and assembly process using the TGICL package. EST sequences were searched against the NCBI nr database by local BLAST (blastx, E
Abstract: There are multiple reasons to expect that detecting word order errors in a text will be a difficult problem, and the detection rates reported in the literature are in fact low. Although grammatical rules constructed by computational linguists improve the performance of grammar checkers in word order diagnosis, the repair task remains very difficult. This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. The novelty of the method lies in the use of an efficient confusion-matrix technique for reordering the words. Its comparative advantage is that it works with a large set of words and avoids the laborious and costly process of collecting word order errors to create error patterns.
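The reorder-and-score idea can be sketched as below, with the language model reduced to a set of known trigrams. For brevity the sketch enumerates all permutations; the paper's confusion-matrix technique exists precisely to avoid this exhaustive search on longer sentences.

```python
from itertools import permutations

def repair_word_order(words, trigrams):
    """Return the ordering of `words` that maximizes the number of
    trigram hits against `trigrams`, a set of known word triples
    standing in for a full trigram language model."""
    def hits(seq):
        return sum((seq[i], seq[i + 1], seq[i + 2]) in trigrams
                   for i in range(len(seq) - 2))
    return max(permutations(words), key=hits)
```

For example, given the scrambled input "sat the down cat" and a model containing the trigrams of "the cat sat down", the repaired ordering scores two trigram hits and is selected.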
Abstract: Unstructured peer-to-peer networks are popular due to their robustness and scalability. Query schemes used in unstructured peer-to-peer networks, such as flooding and interest-based shortcuts, suffer from problems such as large communication overhead and long response delays. The use of routing indices has been a popular approach to peer-to-peer query routing: it lets the query routing process learn routes from the feedback collected. In an unstructured network, where no global information is available, an efficient, low-cost routing approach is needed.
In this paper, we propose a novel mechanism for query-feedback-oriented routing indices that achieves routing efficiency in unstructured networks at minimal cost. The approach also applies information retrieval techniques to ensure that the content of a query is understood, so that routing is based not only on query hits but also on query content. Experiments show that the proposed mechanism performs more efficiently than flood-based routing.
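A query-feedback routing index of the general kind described above can be sketched as follows. The per-topic scores and the exponential-averaging update are illustrative assumptions, not the paper's exact mechanism.

```python
class RoutingIndex:
    """Minimal feedback-driven routing index: each peer keeps a
    per-topic score for every neighbour, forwards a query to the
    highest-scoring neighbour for its topic, and updates that score
    from the feedback (number of hits) the query returns."""

    def __init__(self, neighbours):
        self.scores = {n: {} for n in neighbours}

    def best_neighbour(self, topic):
        # neighbour with the highest learned score for this topic
        return max(self.scores, key=lambda n: self.scores[n].get(topic, 0.0))

    def feedback(self, neighbour, topic, hits, alpha=0.5):
        # exponential moving average of observed hits per query
        old = self.scores[neighbour].get(topic, 0.0)
        self.scores[neighbour][topic] = (1 - alpha) * old + alpha * hits
```

Over repeated queries, neighbours that consistently return hits for a topic accumulate higher scores and attract subsequent queries on that topic, without any global knowledge of the network.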
Abstract: EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, along with a snippet that briefly shows the match in context. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page). This implies that the search engine must store the full text of the documents as part of the index.
Such a requirement leads us to pick an appropriate compression algorithm to reduce the space demand. One solution would be to use common compression methods, for instance gzip or bzip2, but it might be preferable to develop a new method that takes advantage of the document structure, or rather, the textual character of the documents.
There already exist special text compression algorithms, as well as methods for compressing XML documents. The aim of this paper is to integrate the two approaches to achieve an optimal compression ratio.
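As a baseline illustration of the common methods mentioned above, the sketch below measures gzip and bzip2 compression ratios on a hypothetical repetitive HTML document using Python's standard library:

```python
import bz2
import gzip

def compression_ratios(text):
    """Compression ratio (compressed size / original size) for the
    stock gzip and bzip2 methods treated here as baselines."""
    raw = text.encode("utf-8")
    return {
        "gzip": len(gzip.compress(raw)) / len(raw),
        "bzip2": len(bz2.compress(raw)) / len(raw),
    }
```

A structure-aware text or XML compressor would be evaluated against exactly these ratios on the indexed documents.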