Abstract: A new numerical method for solving the two-dimensional,
steady, incompressible, viscous flow equations on a
curvilinear staggered grid is presented in this paper. The proposed
methodology is finite difference based, but essentially takes
advantage of the best features of two well-established numerical
formulations, the finite difference and finite volume methods. Some
weaknesses of the finite difference approach are removed by
exploiting the strengths of the finite volume method. In particular,
the issue of velocity-pressure coupling is dealt with in the proposed
finite difference formulation by developing a pressure correction
equation in a manner similar to the SIMPLE approach commonly
used in finite volume formulations. However, since this is purely a
finite difference formulation, numerical approximation of fluxes is
not required. Results obtained from the present method are based on
the first-order upwind scheme for the convective terms, but the
methodology can easily be modified to accommodate higher order
differencing schemes.
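The first-order upwind scheme mentioned above can be sketched as follows; this is a minimal illustration on a uniform 1-D grid (the function name, grid values, and velocity are hypothetical, not the authors' code):

```python
# Minimal sketch: first-order upwind differencing of the convective
# term u * d(phi)/dx at interior nodes of a uniform 1-D grid.
def upwind_convection(phi, u, dx):
    """Return u * dphi/dx at interior nodes using first-order upwinding."""
    n = len(phi)
    conv = [0.0] * n
    for i in range(1, n - 1):
        if u >= 0.0:
            dphi_dx = (phi[i] - phi[i - 1]) / dx   # backward difference
        else:
            dphi_dx = (phi[i + 1] - phi[i]) / dx   # forward difference
        conv[i] = u * dphi_dx
    return conv

print(upwind_convection([0.0, 1.0, 4.0, 9.0], 1.0, 1.0))  # → [0.0, 1.0, 3.0, 0.0]
```

Taking the difference against the upstream neighbour is what makes the scheme stable for convection-dominated flows, at the cost of first-order accuracy.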
Abstract: This paper focuses on the experimental impacts of
ultrasonication, carbonation, and their combination on the quality of
fresh kiwi juice. Today, non-thermal methods such as ultrasonication, which
have imperceptible effects on some properties of the juice such as
taste, flavor, and color, are commonly used for killing
microorganisms. In this paper, some properties of kiwi fruit juice
under ultrasonication, carbonation, and their combination have been
investigated. Those properties include pH, acidity, transparency, and
Brix. The impact on microorganisms has been studied as well. The
results show that combining carbonation and sonication makes
the cavitation more severe without a perceptible effect on the inactivation
of microorganisms.
Abstract: Due to important issues, such as deadlock, starvation,
communication, non-deterministic behavior and synchronization,
concurrent systems are very complex, sensitive, and error-prone.
Thus, ensuring the reliability and accuracy of these systems is
essential. Therefore, there has been considerable interest in the formal
specification of concurrent programs in recent years. Nevertheless,
some features of concurrent systems, such as dynamic process
creation, scheduling, and starvation, have not been formally specified
yet. Also, some other features have been specified partially and/or
have been described using a combination of several different
formalisms and methods whose integration needs too much effort. In
other words, a comprehensive and integrated specification that could
cover all aspects of concurrent systems has not been provided yet.
Thus, this paper makes two major contributions: firstly, it provides a
comprehensive formal framework to specify all well-known features
of concurrent systems. Secondly, it provides an integrated
specification of these features by using just a single formal notation,
i.e., the Z language.
Abstract: Surface roughness (Ra) is one of the most important requirements in the machining process. In order to obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. This research presents the development of a mathematical model for surface roughness prediction before the milling process in order to evaluate the fitness of the machining parameters: spindle speed, feed rate, and depth of cut. 84 samples were run in this study using a FANUC CNC Milling α-T14iE machine. Those samples were randomly divided into two data sets: the training set (m=60) and the testing set (m=24). ANOVA analysis showed that at least one of the population regression coefficients was not zero. The Multiple Regression Method was used to determine the correlation between a criterion variable and a combination of predictor variables. It was established that the surface roughness is most influenced by the feed rate. Using the Multiple Regression equation, the average percentage deviation was 9.8% for the testing set and 9.7% for the training set. This showed that the statistical model could predict the surface roughness with about 90.2% accuracy for the testing data set and 90.3% accuracy for the training data set.
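The regression-and-deviation procedure described above can be sketched as follows; the samples, the solver, and all names here are hypothetical stand-ins for the paper's 84-sample data set:

```python
# Illustrative sketch (synthetic data, not the paper's samples): fit a
# multiple linear regression Ra = b0 + b1*speed + b2*feed + b3*depth by
# solving the normal equations, then report the average percentage deviation.

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system a x = b."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit(X, y):
    """Least squares: solve (X^T X) b = X^T y, with an intercept column."""
    Xa = [[1.0] + row for row in X]
    p = len(Xa[0])
    xtx = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(xtx, xty)

def avg_pct_deviation(y_true, y_pred):
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical samples: (spindle speed, feed rate, depth of cut) -> Ra
X = [[1000, 0.10, 0.5], [1500, 0.20, 0.5], [2000, 0.10, 1.0],
     [1500, 0.30, 1.0], [1000, 0.25, 0.8]]
y = [1.2, 1.9, 1.1, 2.6, 2.1]
b = fit(X, y)
pred = [b[0] + sum(bi * xi for bi, xi in zip(b[1:], row)) for row in X]
print(round(avg_pct_deviation(y, pred), 2))
```

Accuracy in the paper's sense is then simply 100% minus this average percentage deviation.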
Abstract: As the air traffic increases at a hub airport, some
flights cannot land or depart at their preferred target times. This
happens because the airport runways become occupied to near their
capacity. This results in extra costs for both passengers and airlines
because of the loss of connecting flights or more waiting, more fuel
consumption, rescheduling crew members, etc. Hence, devising an
appropriate scheduling method that determines a suitable runway and
time for each flight in order to efficiently use the hub capacity and
minimize the related costs is of great importance. In this paper, we
present a mixed-integer zero-one model for scheduling a set of mixed
landing and departing flights (whereas most previous studies
considered only landings). Since the flight cost is
strongly affected by the airline category, we consider different airline
categories in our model. This model presents a single objective
minimizing the total sum of three terms, namely 1) the weighted
deviation from targets, 2) the scheduled time of the last flight (i.e.,
makespan), and 3) the workload imbalance across the runways. We
solve 10 simulated instances of different sizes up to 30 flights and 4
runways. Optimal solutions are obtained in a reasonable time and
are satisfactory in comparison with the traditional First-Come-First-Served
(FCFS) rule, which is far from optimal in most cases.
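The flavor of the objective (weighted target deviation, makespan, and runway-workload imbalance) can be illustrated with a tiny brute-force sketch; the flights, weights, and slot grid are hypothetical, and this is not the paper's mixed-integer model:

```python
# Toy sketch of the scheduling idea: assign each flight a runway and a
# time slot so that no runway handles two flights at once, minimizing
# weighted target deviation + makespan + runway-workload imbalance.
from itertools import product

flights = [  # (target_time, weight) -- hypothetical data
    (0, 3.0),  # e.g. a major-airline flight, costly to shift
    (0, 1.0),
    (1, 1.0),
]
runways, slots = (0, 1), (0, 1, 2, 3)

def cost(assign):
    """Weighted target deviation + makespan + runway-workload imbalance."""
    dev = sum(w * abs(t - tgt) for (tgt, w), (_, t) in zip(flights, assign))
    makespan = max(t for _, t in assign)
    load = [sum(1 for r, _ in assign if r == rw) for rw in runways]
    return dev + makespan + (max(load) - min(load))

best = min(
    (a for a in product(product(runways, slots), repeat=len(flights))
     if len(set(a)) == len(flights)),   # at most one flight per runway-slot
    key=cost,
)
print(best, cost(best))
```

A real instance replaces this enumeration with a zero-one integer program and solver, but the objective terms combine in the same way.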
Abstract: This paper presents a methodology based on machine
learning approaches for a short-term rain forecasting system. Decision
Tree, Artificial Neural Network (ANN), and Support Vector Machine
(SVM) were applied to develop classification and prediction models
for rainfall forecasts. The goals of this presentation are to
demonstrate (1) how feature selection can be used to identify the
relationships between rainfall occurrences and other weather
conditions and (2) what models can be developed and deployed for
predicting accurate rainfall estimates to support the decisions to
launch the cloud seeding operations in the northeastern part of
Thailand. Datasets were collected during 2004-2006 from the
Chalermprakiat Royal Rain Making Research Center at Hua Hin,
Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making
Research Center at Pimai, Nakhon Ratchasima, and the Thai
Meteorological Department (TMD). A total of 179 records with 57
features were merged and matched by unique date. There are three
main parts in this work. Firstly, a decision tree induction algorithm
(C4.5) was used to classify the rain status into either rain or no-rain.
The overall accuracy of the classification tree reaches 94.41% with
five-fold cross-validation. The C4.5 algorithm was also used to
classify the rain amount into three classes: no-rain (0-0.1 mm),
few-rain (0.1-10 mm), and moderate-rain (>10 mm), and the overall
accuracy of the classification tree reaches 62.57%. Secondly, an ANN
was applied to predict the rainfall amount and the root mean square
error (RMSE) was used to measure the training and testing errors of
the ANN. It is found that the ANN yields a lower RMSE of 0.171 for
daily rainfall estimates, when compared to next-day and next-2-day
estimation. Thirdly, the ANN and SVM techniques were also used to
classify the rain amount into the same three classes: no-rain, few-rain, and
moderate-rain. The models achieved 68.15% and 69.10%
overall accuracy for same-day prediction with the ANN and SVM
models, respectively. The obtained results illustrate the comparative
predictive power of the different methods for rainfall estimation.
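The split criterion behind C4.5-style tree induction can be sketched as follows; the weather records and feature names are made up, and this is not the paper's model:

```python
# Minimal sketch of the information-gain criterion used by C4.5-style
# decision tree induction: pick the feature whose split most reduces
# the entropy of the rain / no-rain labels.
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, feature):
    """Gain of splitting `rows` (dicts of features) on a categorical feature."""
    base = entropy(labels)
    rem = 0.0
    for v in set(r[feature] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[feature] == v]
        rem += len(sub) / len(rows) * entropy(sub)
    return base - rem

rows = [  # hypothetical weather records
    {"humid": "high", "wind": "low"}, {"humid": "high", "wind": "high"},
    {"humid": "low", "wind": "low"},  {"humid": "low", "wind": "high"},
]
labels = ["rain", "rain", "no-rain", "no-rain"]
print(info_gain(rows, labels, "humid"), info_gain(rows, labels, "wind"))
# humid separates the classes perfectly (gain 1.0); wind carries no information
```

C4.5 additionally normalizes this gain by the split's own entropy (gain ratio) and handles continuous features by thresholding, but the recursion is driven by the same quantity.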
Abstract: Renewable and non-renewable resource constraints have been extensively studied in theoretical work on project scheduling problems. However, although cumulative resources are widespread in practical cases, the literature on project scheduling problems subject to these resources is scant. To study this type of resource further, in this paper we use the framework of a resource-constrained project scheduling problem (RCPSP) with finish-start precedence relations between activities, subject to cumulative resources in addition to renewable resources. We develop a branch and bound algorithm for this problem by customizing the precedence tree algorithm for the RCPSP. We perform extensive experimental analysis of the algorithm to check its effectiveness and performance in solving different instances of the problem in question.
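A minimal sketch of precedence- and resource-feasible scheduling in the RCPSP setting, assuming a single renewable resource and a greedy serial schedule (not the paper's branch and bound; all data are hypothetical):

```python
# Toy serial schedule generation: activities with finish-start precedence
# and one renewable resource; each activity starts as early as both its
# predecessors and the resource capacity allow.
def serial_sgs(durations, demands, preds, capacity):
    """Greedy serial schedule generation in index order; returns start times."""
    starts, finishes = {}, {}

    def usage(t):
        return sum(demands[b] for b in starts if starts[b] <= t < finishes[b])

    for a in range(len(durations)):
        t = max((finishes[p] for p in preds[a]), default=0)
        # shift right until the renewable resource suffices over [t, t+dur)
        while any(usage(u) + demands[a] > capacity
                  for u in range(t, t + durations[a])):
            t += 1
        starts[a], finishes[a] = t, t + durations[a]
    return starts

print(serial_sgs([3, 2, 2], [2, 2, 2], preds=[[], [0], []], capacity=3))
# activity 2 must wait for free capacity even though it has no predecessors
```

A branch and bound over the precedence tree explores many such orderings and prunes dominated ones; cumulative resources add a further feasibility check per state.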
Abstract: The methanol-to-olefins process, coupled with the transformation
of coal or natural gas to methanol, provides an interesting and promising
way to produce ethylene and propylene. To investigate the solid concentration
in gas-solid fluidized bed for methanol-to-olefins process catalyzed by
SAPO-34, a cold model experiment system is established in this paper.
The system comprises a gas distributor in an acrylic column with a
300 mm internal diameter and a 5000 mm height, a fiber optic probe
system, and a series of cyclones. The experiments are carried out at
ambient conditions under superficial gas velocities ranging from
0.3930 m/s to 0.7860 m/s and initial bed heights ranging from
600 mm to 1200 mm. The effects of radial distance, axial distance,
superficial gas velocity, and initial bed height on solid concentration in the
bed are discussed. The effects of distributor shape and porosity on
solid concentration are also discussed. The time-averaged solid
concentration profiles under different conditions are obtained.
Abstract: Scheduling algorithms are used in operating systems
to optimize the usage of processors. One of the most efficient
algorithms for scheduling is Multi-Layer Feedback Queue (MLFQ)
algorithm, which uses several queues with different quanta. The most
important weakness of this method is the inability to determine the
optimal number of queues and the quantum of each queue. This
weakness has been addressed in the IMLFQ scheduling algorithm.
The number of queues and the quantum of each queue directly affect
the response time. In this paper, we review the IMLFQ algorithm for
solving these problems and minimizing the response time. In this
algorithm, a recurrent neural network is utilized to find both
the number of queues and the optimized quantum of each queue.
Also in order to prevent any probable faults in processes' response
time computation, a new fault tolerant approach has been presented.
In this approach, we use combinational software redundancy to
prevent any probable faults. The experimental results show that
the IMLFQ algorithm yields better response times in
comparison with other scheduling algorithms, and that the fault-tolerant
mechanism further improves IMLFQ performance.
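The basic multi-level feedback queue mechanics that IMLFQ tunes can be sketched as follows; the quanta and burst times are hypothetical, and this is not the IMLFQ algorithm itself:

```python
# Minimal multi-level feedback queue simulation: each level has its own
# quantum; a job that exhausts its quantum is demoted one level.
from collections import deque

def mlfq(burst_times, quanta):
    """Simulate MLFQ for jobs all arriving at t=0; return completion times."""
    levels = [deque() for _ in quanta]
    for j, burst in enumerate(burst_times):
        levels[0].append((j, burst))
    done, t = {}, 0
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
        job, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining)
        t += run
        remaining -= run
        if remaining == 0:
            done[job] = t
        else:                              # used its full quantum: demote
            levels[min(lvl + 1, len(levels) - 1)].append((job, remaining))
    return [done[j] for j in range(len(burst_times))]

print(mlfq([3, 6], quanta=[2, 4]))  # → [5, 9]
```

The choice of `quanta` and the number of levels is exactly what IMLFQ optimizes; with poor quanta, short jobs get stuck behind long ones and response times degrade.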
Abstract: This paper presents a novel template-based method to
detect objects of interest from real images by shape matching. To
locate a target object that has a similar shape to a given template
boundary, the proposed method integrates three components: contour
grouping, partial shape matching, and boundary verification. In the
first component, low-level image features, including edges and
corners, are grouped into a set of perceptually salient closed contours
using an extended ratio-contour algorithm. In the second component,
we develop a partial shape matching algorithm to identify the
fractions of detected contours that partly match given template
boundaries. Specifically, we represent template boundaries and
detected contours using landmarks, and apply a greedy algorithm to
search for matched landmark subsequences. For each matched
fraction between a template and a detected contour, we estimate an
affine transform that maps the whole template onto a hypothetical
boundary. In the third component, we provide an efficient algorithm
based on oriented edge lists to determine the target boundary from
the hypothetical boundaries by checking each of them against image
edges. We evaluate the proposed method on recognizing and
localizing 12 template leaves in a data set of real images with cluttered
backgrounds, illumination variations, occlusions, and image noise.
The experiments demonstrate the high performance of the proposed
method.
Abstract: Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models is often focused on the solution of the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as a stochastic Petri net. A serious problem then arises due to the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from the state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes, and the implementation coded in a programming language. Besides, the compact model representation provides the opportunity to apply non-simulative solution techniques while preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
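A transition class can be represented as a pair of a propensity function and a state update, which is already enough to drive a stochastic simulation; the sketch below is illustrative (species counts and rate constants are made up) for an enzyme-catalyzed conversion E + S ⇌ ES → E + P:

```python
# Sketch of a transition class model driving a Gillespie-style stochastic
# simulation: each class is a (propensity, state update) pair, so the
# reachable state space is never enumerated explicitly.
import random

def gillespie(state, classes, t_end, rng):
    t = 0.0
    while t < t_end:
        props = [a(state) for a, _ in classes]
        total = sum(props)
        if total == 0.0:
            break                              # no transition class enabled
        t += rng.expovariate(total)            # time to next reaction
        r = rng.uniform(0.0, total)            # pick a class by propensity
        for p, (_, update) in zip(props, classes):
            if r < p:
                update(state)
                break
            r -= p
    return state

classes = [
    (lambda s: 0.01 * s["E"] * s["S"],         # binding   E + S -> ES
     lambda s: s.update(E=s["E"] - 1, S=s["S"] - 1, ES=s["ES"] + 1)),
    (lambda s: 0.1 * s["ES"],                  # unbinding ES -> E + S
     lambda s: s.update(E=s["E"] + 1, S=s["S"] + 1, ES=s["ES"] - 1)),
    (lambda s: 0.5 * s["ES"],                  # conversion ES -> E + P
     lambda s: s.update(E=s["E"] + 1, P=s["P"] + 1, ES=s["ES"] - 1)),
]
final = gillespie({"E": 10, "S": 50, "ES": 0, "P": 0}, classes, 100.0, random.Random(1))
print(final)
```

The same list of (propensity, update) pairs could equally feed a non-simulative solver of the master equation, which is the interface role the formalism plays.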
Abstract: Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. The so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound for the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords: generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
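For background, a small pure-Python Doolittle LU decomposition of Ax = b (illustrative only; no pivoting, so it assumes nonzero leading principal minors):

```python
# Doolittle LU decomposition: A = L U with unit-diagonal L and upper
# triangular U; the inner sums are the operations whose parallel
# scheduling the processor lower-bound analysis concerns.
def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                  # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):              # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
print(L, U)  # → L = [[1.0, 0.0], [1.5, 1.0]], U = [[4.0, 3.0], [0.0, -1.5]]
```

Once L and U are known, Ax = b reduces to two triangular solves (forward then backward substitution).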
Abstract: Road rage is an increasingly prevalent expression of
aggression in our society. Its dangers are apparent and understanding
its causes may shed light on preventative measures. This study
involved a fifteen-minute survey administered to 147 undergraduate
students at a northeastern suburban university. The survey
consisted of a demographics section, questions regarding financial
investment in respondents' vehicles, experience driving, habits of
driving, experiences witnessing role models driving, and an
evaluation of road rage behavior using the Driving Vengeance
Questionnaire. The study found no significant differences in driving
aggression between respondents who were financially invested in
their vehicle compared to those who were not, or between
respondents who drove in heavy traffic hours compared to those who
did not, suggesting internal factors correlate with aggressive driving
habits. The study also found significant differences in driving
aggression between males versus females, those with more points on
their license versus fewer points, and those who witnessed parents
driving aggressively very often versus rarely or never. Additional
studies can investigate how witnessing parents driving aggressively
is related to future driving behaviors.
Abstract: This paper deals with the synthesis of fuzzy controller
applied to a permanent magnet synchronous machine (PMSM) with a
guaranteed H∞ performance. To design this fuzzy controller,
nonlinear model of the PMSM is approximated by Takagi-Sugeno
fuzzy model (T-S fuzzy model), then the so-called parallel
distributed compensation (PDC) is employed. Next, we derive the
property of the H∞ norm. The latter is cast in terms of linear matrix
inequalities (LMIs) while minimizing the H∞ norm of the transfer
function T_ev between the disturbance and the error. Experiments
and simulations were conducted on a permanent
magnet synchronous machine to illustrate the effects of the fuzzy
modelling and the controller design via the PDC.
Abstract: There has been gradual progress of late in construction projects, particularly in large-scale megaprojects. However, due to the long construction period, the large-scale budget investment, the lack of construction management technologies, and the increase in the incomplete elements of project schedule management, a plan to conduct efficient operations and to ensure business safety is required. In particular, as the project management information system (PMIS) is meant for managing a single project centering on the construction phase, there is a limitation in the management of program-scale businesses like megaprojects. Thus, a program management information system (PgMIS) that includes program-level management technologies is needed to manage multiple projects. In this study, a support tool was developed for managing the cost and schedule information occurring in the construction phase at the program level. In addition, a case study on the developed support tool was conducted to verify the usability of the system. With the developed support tool, construction managers can monitor the progress of the entire project and of the individual subprojects in real time.
Abstract: Many supervised induction algorithms require discrete
data, even though real data often comes in both discrete
and continuous formats. Quality discretization of continuous
attributes is an important problem that has effects on speed,
accuracy and understandability of the induction models. Usually,
discretization and other types of statistical processes are applied
to subsets of the population as the entire population is practically
inaccessible. For this reason we argue that the discretization
performed on a sample of the population is only an estimate of
the entire population. Most of the existing discretization methods
partition the attribute range into two or several intervals using
a single or a set of cut points. In this paper, we introduce a
technique that uses resampling (such as the bootstrap) to generate
a set of candidate discretization points, thus improving the
discretization quality by providing a better estimate of
the entire population. The goal of this paper is to observe
whether the resampling technique can lead to better discretization
points, which opens up a new paradigm for the construction of
soft decision trees.
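The resampling idea can be sketched as follows; the data, the purity criterion, and the aggregation by averaging are hypothetical simplifications of the technique described above:

```python
# Sketch of bootstrap-based discretization: re-estimate a single binary
# cut point on bootstrap resamples and aggregate the candidates into a
# more stable discretization point for the attribute.
import random
from statistics import mean

def best_cut(values, labels):
    """Midpoint cut that best separates two classes (fewest misclassified)."""
    pairs = sorted(zip(values, labels))
    best, best_err = None, len(pairs) + 1
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v < cut]
        right = [l for v, l in pairs if v >= cut]
        err = min(left.count(0), left.count(1)) + min(right.count(0), right.count(1))
        if err < best_err:
            best, best_err = cut, err
    return best

def bootstrap_cut(values, labels, n_boot, rng):
    """Average the best cut found on each bootstrap resample."""
    cuts = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(values)) for _ in values]
        cuts.append(best_cut([values[i] for i in idx], [labels[i] for i in idx]))
    return mean(cuts)

values = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]   # hypothetical continuous attribute
labels = [0, 0, 0, 1, 1, 1]               # class 0 below the gap, 1 above
cut = bootstrap_cut(values, labels, n_boot=200, rng=random.Random(0))
print(round(cut, 2))
```

Keeping the whole set of bootstrap cuts, rather than just their mean, is what opens the door to soft decision boundaries.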
Abstract: Medical studies often require different methods
for parameter selection as a second step of processing, after the
design of the database and its population with information. One common
task is the selection of fields that act as risk factors using well-known
methods, in order to find the most relevant risk factors and
to establish a possible hierarchy between them. Different methods
are available for this purpose, one of the best known being
binary logistic regression. We will present the mathematical
principles of this method and a practical example of using it in the
analysis of the influence of 10 different psychiatric diagnostics
over 4 different types of offences (in a database made from 289
psychiatric patients involved in different types of offences).
Finally, we will make some observations about the relation
between the risk factor hierarchy established through binary
logistic regression and the individual risks, as well as the results of
the Chi-squared test. We will show that the hierarchy built using
binary logistic regression does not agree with the direct order of risk
factors, even though it would be natural to assume this hypothesis to be
always true.
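For illustration, a minimal binary logistic regression fitted by gradient descent on toy data (not the 289-patient database; the learning rate and epoch count are arbitrary):

```python
# Binary logistic regression with one predictor, fitted by gradient
# descent on the log-loss; exp(b1) is the odds ratio associated with
# the risk factor.
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """One predictor plus intercept: P(y=1) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            err = sigmoid(b0 + b1 * xi) - yi   # gradient of the log-loss
            g0 += err
            g1 += err * xi
        b0 -= lr * g0 / len(x)
        b1 -= lr * g1 / len(x)
    return b0, b1

# risk factor present (1) / absent (0) vs. offence committed (1/0)
x = [0, 0, 0, 0, 1, 1, 1, 1]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(x, y)
print(round(exp(b1), 2))   # estimated odds ratio for the risk factor
```

With ten diagnostics as predictors, the relative magnitudes of the fitted coefficients induce the risk factor hierarchy discussed above.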
Abstract: The study was carried out to gather and identify
medicinal plants, their curative effects, and the parts of the plants that are
used, from the reservation area of Miankaleh. The region under study
has an area of 68800 hectares situated 12 kilometers north of the city
of Behshahr and northwest of the city of Gorgan. Results obtained
showed that out of a total of 43 families, 125 genera, and 155 species
found in the region, 33 families, 52 genera and 61 species (39% of all
the species) belonged to medicinal plants, among which the family
Asteraceae, with 6 species, and the family Chenopodiaceae, with 5
species, had the most medicinal species. The most used parts of the
plants were the leaves with 31%, the whole plants with 19%, and the
roots with 15%.
Abstract: In this work, an organic compound 5,10,15,20-
Tetrakis(3,5-di-tertbutylphenyl)porphyrinatocopper(II) (TDTBPPCu)
is studied as an active material for thin film electronic devices. To
investigate the electrical properties of TDTBPPCu, junction of
TDTBPPCu with heavily doped n-Si and Al is fabricated.
TDTBPPCu film was sandwiched between Al and n-Si electrodes.
Various electrical parameters of TDTBPPCu are determined. The
current-voltage characteristics of the junction are nonlinear and
asymmetric and show rectification behavior, which points to the
formation of a depletion region. This behavior indicates the potential
of TDTBPPCu for electronics applications. The current-voltage and
capacitance-voltage techniques are used to find the different
electronic parameters.
Abstract: The market transformation in Kazakhstan during the
last two decades has substantially widened the gap between the
development of urban and rural areas. The implementation of market
institutions, the transition from public financing to paid rendering of social
services, and changes in the forms of financing of social and economic
infrastructure have led to greater economic inequality among
social groups, including growing stratification between the city and the
village. A sociological survey of urban and rural households in Almaty
city and villages of Almaty region has been carried out within the
international research project “Livelihoods Strategies of Private
Households in Central Asia: A Rural–Urban Comparison in
Kazakhstan and Kyrgyzstan” (Germany, Kazakhstan, Kyrgyzstan).
The analysis of statistical data and the results of the sociological survey of
urban and rural households allow us to reveal issues of territorial
development, to investigate the availability of medical, educational,
and other services in the city and the village, to reveal urban and rural
dwellers' evaluations of living conditions, and to compare the economic
strategies of households in the city and the village.