Abstract: This study presents a general methodology for predicting the performance of a continuous near-critical fluid extraction process that removes compounds from aqueous solutions using hollow fiber membrane contactors. A comprehensive 2D mathematical model was developed to study the Porocritical extraction process. The system studied in this work is a membrane-based extractor of ethanol and acetone from aqueous solutions using near-critical CO2. Extraction percentages predicted by the simulations have been compared with the experimental values reported by Bothun et al. [5]. The simulated extraction percentages of ethanol and acetone show average deviations of 9.3% and 6.5%, respectively, from the experimental data. The more accurate predictions for acetone can be explained by a better estimation of the transport properties in the aqueous phase, which controls the extraction of this solute.
Abstract: The problem addressed herein is the efficient management of the intensive Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large, sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, realized with MPI programming tools on a SUN V240 cluster interconnected through 100 Mbps and 1 Gbps Ethernet networks, is evaluated through the included speedup measurements.
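The solver at the heart of this computation can be illustrated with a minimal serial sketch. The fragment below is hypothetical and unpreconditioned, and a simple 1D modified-Helmholtz finite-difference matrix stands in for the paper's Hermite collocation system; it only shows the Bi-CGSTAB iteration that the pipeline architecture would distribute:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Unpreconditioned Bi-CGSTAB (van der Vorst) for A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rhat = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = rhat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (rhat @ v)
        s = r - alpha * v                # intermediate residual
        if np.linalg.norm(s) < tol:
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)        # stabilizing line search
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho = rho_new
    return x

# Illustrative stand-in system: 1D modified Helmholtz -u'' + lam*u = f,
# central differences on 50 interior points (NOT the collocation matrix).
n, lam = 50, 1.0
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0 + lam * h**2))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
b = h**2 * np.ones(n)
x = bicgstab(A, b)
```

In the paper's setting the matrix-vector products `A @ p` and `A @ s` are the expensive kernels, and the red-black block structure is what lets them be pipelined across slave processes.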
Abstract: The distillation column is one of the most common operations in the process industries and also one of the most energy-intensive units. Many ideas have been presented in the related literature for optimizing energy consumption in distillation columns. This paper studies different heat integration methods in a distillation column that separates benzene, toluene, xylene, and C9+. Three heat integration schemes, the indirect sequence (IQ), the indirect sequence with forward energy integration (IQF), and the indirect sequence with backward energy integration (IQB), have been studied. Using the shortcut method, these heat integration schemes were simulated in Aspen HYSYS and compared with each other on economic grounds. The results show that energy consumption is reduced by 33% in IQF and 28% in IQB compared with the IQ scheme. The economic results also show that the total annual cost is reduced by 12% in IQF and 8% in IQB relative to the IQ scheme. Therefore, the IQF scheme is more economical than the IQB and IQ schemes.
Abstract: The present paper discusses the selection of process parameters for obtaining an optimal nanocrystallite size in the CuO-ZrO2 catalyst. Several parameters that alter the inorganic structure influence the hydrolysis and condensation reactions. A statistical design-of-experiments method is implemented in order to optimize the experimental conditions of CuO-ZrO2 nanoparticle preparation, with the experiments arranged according to the standard L16 orthogonal array. The crystallite size is taken as the response index and is used in the analysis as the parameters vary. The effects of pH, H2O/precursor molar ratio (R), calcination time and temperature, chelating agent, and alcohol volume are investigated in particular among all the parameters. According to the Taguchi results, temperature has the greatest impact on the particle size. The pH and the H2O/precursor molar ratio have little influence compared with temperature. Alcohol volume and time have almost no effect compared with the other parameters. Temperature also influences the morphology and the amorphous structure of the zirconia. The optimal conditions are determined using the Taguchi method. The nanocatalyst is characterized by DTA-TG, XRD, EDS, SEM, and TEM. The results of this research indicate that it is possible to vary the structure, morphology, and properties of the sol-gel product by controlling the above-mentioned parameters.
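The Taguchi ranking step can be sketched as follows. The array and responses below are hypothetical (a small L4 array instead of the paper's L16, with made-up crystallite sizes in which the first factor dominates by construction); the sketch only illustrates how mean responses per factor level and their ranges rank factor influence:

```python
import numpy as np

# L4 orthogonal array: 4 runs, 3 two-level factors (levels coded 0/1)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical crystallite-size responses (nm); factor 0 dominates by design
y = np.array([12.0, 13.0, 25.0, 26.0])

def main_effects(array, y):
    """Per-factor mean response at each level; the level-to-level range
    of the means is the usual Taguchi measure of factor influence."""
    effects = {}
    for j in range(array.shape[1]):
        means = [y[array[:, j] == lvl].mean() for lvl in (0, 1)]
        effects[j] = (means, abs(means[1] - means[0]))
    return effects

eff = main_effects(L4, y)
# Factors ranked from most to least influential by effect range
ranking = sorted(eff, key=lambda j: eff[j][1], reverse=True)
```

With a real L16 array the same per-level averaging over 16 runs yields the factor ranking (temperature first, then pH and molar ratio) that the abstract reports.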
Abstract: The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters among the code vectors found by SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection by traditional algorithms, namely k-means and hierarchical clustering, which are normally used to interpret the output of SOM.
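The optimizer underlying the proposed method can be sketched in its plain global-best form. This is not the authors' adaptive heuristic variant, and the objective here is a toy sphere function rather than a U-matrix boundary-quality score; it only shows the PSO core that such a variant would adapt:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Standard global-best PSO minimizing f over R^dim.
    In the paper's setting, f would score candidate cluster boundaries
    against the SOM U-matrix instead of this toy objective."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                   # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()   # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

best, val = pso(lambda z: float(np.sum(z**2)), dim=3)
```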
Abstract: The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, has enabled Support Vector Machines (SVM) to be applied successfully in many fields, e.g., classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K.) is a useful kernel function that possesses many properties, e.g., positive definiteness, the reproducing property, and the ability to compose complex R.K.s by simple operations. There are two popular ways to compute an R.K. in explicit form. One is to construct and solve a specific differential equation with boundary values, whose drawback is that it cannot yield a unified form of the R.K. The other uses a piecewise integral of the Green function associated with a differential operator L. The latter benefits the computation of an R.K. with a unified explicit form and its theoretical analysis, although studies of it are relatively recent and practical computations few. In this paper, a new algorithm for computing an R.K. is presented. It obtains the unified explicit form of the R.K. in a general reproducing kernel Hilbert space (RKHS), avoids constructing and solving complex differential equations manually, and enables automatic, flexible, and rigorous computation for more general RKHSs. In order to validate that the R.K. computed by the algorithm works well in SVM, some illustrative examples and a comparison between the R.K. and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K. is close to, or slightly better than, that of the RBF.
Abstract: A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods of estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare 1,080 pairs of estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
Abstract: The present paper develops and validates a numerical procedure for the calculation of turbulent combusting flow in converging and diverging ducts and, through simulation of the heat-transfer processes, quantifies the production and spread of the NOx pollutant. A marching integration solution procedure employing the TDMA is used to solve the discretized equations. The turbulence model is the Prandtl mixing-length method. The combustion process is modeled using the Arrhenius and eddy-dissipation methods, and the thermal mechanism is used to model the formation of nitrogen oxides. The finite difference method and the Genmix numerical code are used for the numerical solution of the equations. Our results indicate the important influence of the limiting diverging angle of the diffuser on the pressure-recovery coefficient. Moreover, because NOx formation depends strongly on the maximum temperature in the domain, the NOx level also peaks under this condition.
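The two combustion closures named above can be sketched in a few lines. All numbers below (pre-exponential factor, activation energy, mixture state) are illustrative assumptions, not values from the paper; the local reaction rate is taken as the minimum of the kinetic (Arrhenius) rate and the turbulence-limited (eddy-dissipation) rate, so the slower process controls:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate_constant(T, A=1.0e9, Ea=1.2e5):
    """Kinetic rate constant k = A * exp(-Ea / (R T)); A and Ea assumed."""
    return A * math.exp(-Ea / (R_GAS * T))

def eddy_dissipation_rate(rho, eps_over_k, Y_fuel, Y_ox, s, A_edm=4.0):
    """Simplified Magnussen mixing-limited rate:
    A * rho * (eps/k) * min(Y_fuel, Y_ox / s)."""
    return A_edm * rho * eps_over_k * min(Y_fuel, Y_ox / s)

# Illustrative local state (assumed, not from the paper)
T, rho, eps_over_k = 1800.0, 0.25, 50.0
Y_fuel, Y_ox, s = 0.02, 0.15, 3.5

r_kin = arrhenius_rate_constant(T) * rho * Y_fuel   # crude kinetic rate
r_mix = eddy_dissipation_rate(rho, eps_over_k, Y_fuel, Y_ox, s)
r_local = min(r_kin, r_mix)                         # slower process controls
```

The strong exponential temperature dependence of the Arrhenius factor is also what makes thermal NOx formation track the maximum temperature in the domain.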
Abstract: In this paper, we study a numerical method for solving second-order fuzzy differential equations using the Adomian method under strongly generalized differentiability. We present an example with an initial condition admitting four different solutions to illustrate the efficiency of the proposed method under strongly generalized differentiability.
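The decomposition iteration can be sketched on a crisp analogue. The example below applies the Adomian recursion to y'' = -y, y(0) = 1, y'(0) = 0 (exact solution cos t), a test problem chosen for illustration rather than taken from the paper; fuzzy initial data under strongly generalized differentiability would run the same recursion on the endpoint functions of the level sets:

```python
import math
import numpy as np

# Adomian recursion for y'' = -y, y(0) = 1, y'(0) = 0:
#   y_0 = 1 (from the initial data),  y_{n+1} = -L^{-1} y_n,
# where L^{-1} is double integration from 0; partial sums tend to cos t.
def inverse_L(poly):
    """Double integration of polynomial coefficients (ascending powers)."""
    return np.polynomial.polynomial.polyint(poly, m=2)

terms = [np.array([1.0])]                 # y_0
for _ in range(10):
    terms.append(-inverse_L(terms[-1]))   # y_{n+1} = -L^{-1} y_n

# Sum the series terms and evaluate the partial sum at t = 1
series = np.zeros(max(len(c) for c in terms))
for c in terms:
    series[:len(c)] += c
approx = float(np.polynomial.polynomial.polyval(1.0, series))  # ~ cos(1)
```

Each term is obtained in closed polynomial form, which is what makes the method attractive when the right-hand side is simple enough to integrate repeatedly.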
Abstract: It has recently been found that the telegraph equation is more suitable than the ordinary diffusion equation for modelling reaction-diffusion processes in several branches of science. In this paper, a numerical solution of the one-dimensional hyperbolic telegraph equation by a collocation method based on septic splines is proposed. The scheme works in a similar fashion to finite difference methods. The accuracy of the presented method is demonstrated by two test problems, for which the L2- and L∞-norms of the error are calculated. The numerical results are found to be in good agreement with the exact solutions.
Abstract: Very few studies have examined the performance implications of strategic alliance announcements in the information technologies industry from a resource-based view. Furthermore, none of these studies have investigated resource congruence and alliance motive as potential sources of abnormal firm performance. This paper extends the current resource-based literature to discover and explore linkages between these concepts and the practical performance of strategic alliances. This study finds that strategic alliance announcements have provided overall positive abnormal returns, and that marketing alliances with marketing resource incongruence have also contributed to significant firm performance.
Abstract: In this paper, the telegraph equation is solved numerically by cubic B-spline quasi-interpolation. We obtain the numerical scheme by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its temporal derivative. The advantage of the resulting scheme is that the algorithm is very simple, so it is very easy to implement. The results of numerical experiments are presented and compared with analytical solutions by calculating the L2- and L∞-error norms, confirming the good accuracy of the presented scheme.
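A minimal time-marching solver conveys the structure of such schemes. The sketch below uses plain central finite differences rather than the paper's cubic B-spline quasi-interpolation, on the damped telegraph-type problem u_tt + 4 u_t + 2 u = u_xx on [0, pi] with homogeneous boundary conditions, whose exact solution exp(-t) sin(x) lets the error be checked directly:

```python
import numpy as np

# u_tt + 4 u_t + 2 u = u_xx on [0, pi], u(0,t) = u(pi,t) = 0,
# exact solution u = exp(-t) sin(x).  Centered differences in space and
# time (NOT the paper's quasi-interpolation scheme; same marching idea).
nx, dt, nsteps = 64, 0.01, 50
x = np.linspace(0.0, np.pi, nx + 1)
dx = x[1] - x[0]

u_prev = np.sin(x)                         # u at t = 0
ut0, utt0 = -np.sin(x), np.sin(x)          # from initial data and the PDE
u = u_prev + dt * ut0 + 0.5 * dt**2 * utt0 # Taylor start-up step

a = 1.0 / dt**2 + 2.0 / dt                 # coefficient of u^{n+1}
for _ in range(nsteps - 1):
    d2 = np.zeros_like(u)
    d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # u_xx approximation
    u_next = (d2 + (2.0 / dt**2 - 2.0) * u
              - (1.0 / dt**2 - 2.0 / dt) * u_prev) / a
    u_next[0] = u_next[-1] = 0.0           # Dirichlet boundaries
    u_prev, u = u, u_next

t_final = nsteps * dt
err = float(np.max(np.abs(u - np.exp(-t_final) * np.sin(x))))
```

Replacing the `d2` line with the derivative of a cubic B-spline quasi-interpolant of `u` recovers the spatial approximation the abstract describes, with the marching loop unchanged.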
Abstract: The primary purpose of this study was to assess how key economic factors, namely the Thai export price of white rice, the exchange rate, and world rice consumption, affect overall Thai white rice exports, using historical data for the period 1989-2013 from the Thai Rice Exporters Association and the Food and Agriculture Organization of the United Nations. The co-integration method, regression analysis, and an error correction model were applied to investigate the econometric model. The findings indicated that in the long run, world rice consumption, the exchange rate, and the Thai export price of white rice, in that order, were the important factors affecting the export quantity of Thai white rice, as indicated by their significant coefficients. Meanwhile, the rice export price was the important factor affecting the export quantity of Thai white rice in the short run. This information is useful for businesses, export opportunities, price competitiveness, and policymakers in Thailand.
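The two-step estimation behind such co-integration/error-correction analyses can be sketched on synthetic data. The series below are simulated (a random-walk regressor and a cointegrated response with assumed coefficients), standing in for the rice-export data; step 1 fits the long-run relation, step 2 regresses the differenced response on the differenced regressor and the lagged equilibrium error:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
# Synthetic cointegrated pair: x is a random walk,
# y = 2 + 0.5 x + stationary noise (assumed coefficients).
x = np.cumsum(rng.normal(size=T))
y = 2.0 + 0.5 * x + rng.normal(scale=0.2, size=T)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1 (Engle-Granger): long-run regression y_t = a + b x_t + z_t
a, b = ols(x, y)
z = y - a - b * x                      # equilibrium error

# Step 2: error-correction model  dy_t = c + d dx_t + g z_{t-1} + e_t
dy, dxx, zlag = np.diff(y), np.diff(x), z[:-1]
c, d, g = ols(np.column_stack([dxx, zlag]), dy)
```

A negative coefficient `g` on the lagged error indicates adjustment back toward the long-run relation, which is the mechanism the abstract's long-run/short-run distinction rests on.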
Abstract: This paper argues that a product development exercise involves, in addition to the conventional stages, several decisions regarding other aspects. These aspects should be addressed simultaneously in order to develop a product that responds to customer needs and helps realize the objectives of the stakeholders in terms of profitability, market share, and the like. We present a framework that encompasses these different development dimensions. The framework shows that a product development methodology such as Quality Function Deployment (QFD) is the basic tool for defining the target specifications of a new product. Creativity is the first dimension that enables the development exercise to proceed and conclude successfully; a number of group processes need to be followed by the development team in order to ensure sufficient creativity and innovation. Secondly, packaging is considered an important extension of the product. Branding strategies, quality and standardization requirements, identification technologies, design technologies, production technologies, and costing and pricing are also integral parts of the development exercise. These dimensions constitute the proposed framework. The paper also presents a mathematical model used to calculate the design targets based on the target costing principle. The framework is used to study a case of new product development in the telecommunications services sector.
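The target-costing calculation such a model builds on can be sketched in a few lines. All figures and component weights below are hypothetical, merely illustrating the principle that the allowable cost (target price minus required margin) is allocated across components in proportion to QFD-style importance weights:

```python
# Hypothetical target-costing allocation in the spirit of the framework.
target_price = 120.0   # assumed market-driven price
margin_rate = 0.25     # assumed required profit margin

# Allowable cost = target price minus required margin
allowable_cost = target_price * (1 - margin_rate)

# QFD-style relative importance of components (assumed weights, sum to 1)
weights = {"handset": 0.40, "software": 0.35,
           "packaging": 0.10, "support": 0.15}

# Each component's cost target is its share of the allowable cost
cost_targets = {name: round(allowable_cost * w, 2)
                for name, w in weights.items()}
```

In the paper's model the weights would come from the QFD relationship matrix, closing the loop between customer-derived specifications and the design cost targets.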
Abstract: We present a supervised, speech-independent voice recognition technique in this paper. For the feature extraction stage we propose a mel-cepstrum-based approach. Our feature-vector classification method uses a special nonlinear metric, derived from the Hausdorff distance for sets, together with a minimum mean distance classifier.
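The classification stage can be sketched as follows. The sketch uses the plain symmetric Hausdorff distance between feature-vector sets and a nearest-template decision; the paper's actual metric is a nonlinear derivative of the Hausdorff distance, and the template data below are made up:

```python
import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Made-up 2-D "cepstral" templates for two speakers (real mel-cepstral
# vectors would be higher-dimensional)
templates = {"spk1": np.array([[0.0, 0.0], [0.2, 0.1]]),
             "spk2": np.array([[5.0, 5.0], [5.2, 4.9]])}

def classify(sample):
    """Assign the sample set to the template at minimum set distance."""
    return min(templates, key=lambda k: hausdorff(sample, templates[k]))

label = classify(np.array([[0.1, 0.05]]))
```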
Abstract: This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n) (where 'n' represents the number of distinct primary inputs). The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in the one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. The binary value corresponding to each candidate pair is then correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to reduce the literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to reduce the problem to one of O(n-1), O(n-2), ... and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with those of other methods.
In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm achieved net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms), and AND-OR-EXOR logic of 45.57%, 41.78%, and 41.78%, respectively, across the various problems.
Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supplies to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35 micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%; with respect to AND-EXOR logic, it yielded power savings of 31.88%, while in comparison with AND-OR-EXOR networks, average power savings of 33.23% were obtained.
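The pairing criterion can be illustrated directly. The minterms below are hypothetical; the sketch computes Hamming distances between one-set minterms encoded as integers and pairs each with a maximally distant partner, the O(n) adjacency the method exploits:

```python
def hamming(a, b):
    """Hamming distance between two minterms given as n-bit integers."""
    return bin(a ^ b).count("1")

n = 4
one_set = [0b0000, 0b1111, 0b0011, 0b1100]   # hypothetical one-set minterms

# Pair each minterm with a partner at maximal Hamming distance
pairs = {m: max((k for k in one_set if k != m), key=lambda k: hamming(m, k))
         for m in one_set}
```

For a self-dual function the complement of every one-set minterm (Hamming distance n) lies in the zero-set, which is what makes the distance-based grouping well defined.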
Abstract: Food mileage is one of the important issues concerning environmental sustainability. In this research we utilized a prototype platform with iterative user-centered testing. With these findings we demonstrate the successful use of persuasive methods in context to influence users' attitudes towards the sustainability concept.
Abstract: In the present research, a finite element model is presented to study the geometrically and materially nonlinear behavior of reinforced concrete plane frames considering soil-structure interaction. The nonlinear behavior of concrete and reinforcing steel is considered in both compression and tension, up to failure. The model also accounts for the number, diameter, and distribution of the rebars along every cross-section. Soil behavior is taken into consideration using four different models, namely the linear and nonlinear Winkler models and the linear and nonlinear continuum models. A computer program (NARC) has been specially developed to perform the analysis. The results achieved by the present model show good agreement with published theoretical and experimental results. For demonstration, the nonlinear behavior up to failure of a rectangular frame resting on soft soil is presented using the proposed model.
Abstract: By using coincidence degree theory and constructing suitable Lyapunov functionals, several sufficient conditions are established for the existence and global exponential stability of anti-periodic solutions of Cohen-Grossberg shunting inhibitory neural networks with delays. An example is given to illustrate the feasibility of our results.
Abstract: Salinity is a measure of the amount of salts in water. Total Dissolved Solids (TDS), a salinity parameter, is often determined by laborious and time-consuming laboratory tests, but it may be more appropriate and economical to develop a method based on a simpler salinity index. Because dissolved ions increase salinity as well as conductivity, the two measures are related. The aim of this research was to determine constant coefficients for predicting Total Dissolved Solids (TDS) from Electrical Conductivity (EC), evaluated with the following statistics: correlation coefficient, root mean square error, maximum error, mean bias error, mean absolute error, relative error, and coefficient of residual mass. For this purpose, two experimental areas (S1, S2) in Khuzestan Province, Iran, were selected, and four treatments with three replications were applied using series of double rings. The treatments consisted of 25 cm, 50 cm, 75 cm, and 100 cm of applied water. The results showed that 16.3 and 12.4 were the best constant coefficients for predicting TDS from EC in pilots S1 and S2, with correlation coefficients of 0.977 and 0.997 and root mean square errors (RMSE) of 191.1 and 106.1, respectively.
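The constant-coefficient fit reduces to a one-parameter least-squares problem through the origin. The EC/TDS pairs below are made up (constructed so the coefficient comes out near 16.3, the value reported for pilot S1) and only illustrate the computation of the coefficient, the RMSE, and the correlation coefficient:

```python
import numpy as np

# Hypothetical EC and TDS pairs (NOT the paper's field measurements)
ec = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2])
tds = np.array([13.2, 24.0, 37.5, 50.1, 66.0, 84.5])

# Least-squares constant through the origin for TDS = k * EC:
#   k = sum(EC * TDS) / sum(EC^2)
k = float(ec @ tds / (ec @ ec))

pred = k * ec
rmse = float(np.sqrt(np.mean((tds - pred) ** 2)))   # root mean square error
r = float(np.corrcoef(tds, pred)[0, 1])             # correlation coefficient
```

The same computation repeated per pilot area, together with the other error statistics listed in the abstract, yields the reported coefficients and fit quality.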