Abstract: Energetic and structural results for ethanol-water mixtures as a function of the mole fraction were calculated using Monte Carlo methodology. Energy partitioning results obtained for the equimolar water-ethanol mixture and other organic liquids are compared. It is shown that at x_et = 0.22 the RDFs for water-ethanol and ethanol-ethanol interactions indicate strong hydrophobic interaction between ethanol molecules, and that the local structure of the solution is less ordered at this concentration than at other ones. The results obtained for the ethanol-water mixture as a function of concentration are in good agreement with the experimental data.
Abstract: An immunomodulator bioproduct is prepared in a
batch bioprocess with a modified bacterium Pseudomonas
aeruginosa. The bioprocess is performed in a 100 L Bioengineering
bioreactor with 42 L of cultivation medium made of peptone, meat
extract and sodium chloride. The optimal bioprocess parameters were
determined: temperature – 37 °C, agitation speed – 300 rpm, aeration
rate – 40 L/min, pressure – 0.5 bar, Dow Corning Antifoam M – max.
4% of the medium volume, duration – 6 hours. Bioprocesses of this
kind are considered difficult to control because their
dynamic behavior is highly nonlinear and time-varying. The aim of
the paper is to present (by comparison) different models based on
experimental data.
The analysis criteria were modeling error and convergence rate.
Parameter estimation and modeling analysis were performed using
Table Curve 2D.
The preliminary conclusions indicate the Andrews model, with a
maximum specific growth rate of the bacterium of about
0.8 h⁻¹.
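The Andrews (substrate-inhibition) kinetics behind this conclusion can be sketched as follows. The half-saturation constant ks and inhibition constant ki are illustrative assumptions; only the maximum specific growth rate of about 0.8 h⁻¹ comes from the abstract:

```python
def andrews_mu(s, mu_max=0.8, ks=0.5, ki=50.0):
    """Andrews specific growth rate (h^-1) with substrate inhibition.

    mu = mu_max * S / (Ks + S + S^2/Ki); ks and ki are hypothetical values.
    """
    return mu_max * s / (ks + s + s * s / ki)
```

The rate peaks at S = sqrt(Ks*Ki) and declines at higher substrate concentrations, which is what distinguishes the Andrews model from simple Monod kinetics.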
Abstract: An Optimal Power Flow based on Improved Particle
Swarm Optimization (OPF-IPSO) with a generator capability curve
constraint is used by an NN-OPF as a reference to obtain generator
scheduling patterns. Designing the NN-OPF involves three stages.
The first stage is the design of the OPF-IPSO with the generator
capability curve constraint. The second is clustering the load into
specific ranges and calculating its index. The third is training the
NN-OPF using the constructive back-propagation method. In the
training process, total load and load index are used as input, and
the generator scheduling pattern is used as output. The data used in
this paper are from the Java-Bali power system, and the simulations
are implemented in MATLAB.
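The particle-swarm core of OPF-IPSO can be sketched in isolation. This is a generic PSO over box constraints standing in for the generator capability-curve limits; the swarm parameters and the quadratic test cost in the usage example are illustrative assumptions, not the paper's actual OPF formulation:

```python
import random

def pso_minimize(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over the box [lo, hi]^d."""
    random.seed(0)  # deterministic demo run
    dim = len(lo)
    xs = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [cost(x) for x in xs]
    g_i = min(range(n_particles), key=lambda i: pbest_f[i])
    g, g_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                # clip to the box, playing the role of the capability limits
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo[d]), hi[d])
            f = cost(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < g_f:
                    g, g_f = xs[i][:], f
    return g, g_f
```

For example, `pso_minimize(lambda x: (x[0]-1)**2 + (x[1]-1)**2, [0.0, 0.0], [2.0, 2.0])` returns a point near (1, 1).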
Abstract: This paper presents an application of level sets for the segmentation of abdominal and thoracic aortic aneurysms in CTA
datasets. An important challenge in reliably detecting aortic
aneurysms is the need to overcome problems associated with intensity
inhomogeneities. Level sets are part of an important class of methods
that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A kernel function in the
level set formulation aids the suppression of noise in the extracted
regions of interest and then guides the motion of the evolving contour
for the detection of weak boundaries. The speed of curve evolution
has been significantly improved, with a resulting decrease in
segmentation time compared with previous implementations of level
sets, and the method is shown to be more effective than other
approaches in coping with intensity inhomogeneities. We have applied
the Courant-Friedrichs-Lewy (CFL) condition as the stability
criterion for our algorithm.
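The CFL stability criterion mentioned above amounts to bounding the time step by the grid spacing over the fastest speed on the evolving front. A minimal sketch (the safety factor of 0.5 is an illustrative assumption):

```python
def cfl_timestep(dx, speeds, cfl_number=0.5):
    """Largest stable explicit time step under the CFL condition.

    dt <= C * dx / max|F|, where F are the speed values on the grid
    and C < 1 is a safety factor.
    """
    vmax = max(abs(s) for s in speeds)
    return cfl_number * dx / vmax
```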
Abstract: Developing a stable early warning system (EWS)
model capable of giving accurate predictions is a challenging
task. This paper introduces the k-nearest neighbour (k-NN) method,
which has never been applied to predicting currency crises before,
with the aim of increasing prediction accuracy. The performance of
the proposed k-NN depends on the choice of distance; in our
analysis we consider the Euclidean and Manhattan distances. For
comparison, we employ three other methods
which are logistic regression analysis (logit), back-propagation neural
network (NN) and sequential minimal optimization (SMO). The
analysis using datasets from 8 countries and 13 macro-economic
indicators for each country shows that the proposed k-NN method
with k = 4 and Manhattan distance performs better than the other
methods.
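The k-NN classifier with the two candidate distances can be sketched in a few lines; the toy feature vectors and class labels in the test are illustrative, not the paper's macro-economic indicators:

```python
from collections import Counter

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=4, dist=manhattan):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbours = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

With k = 4 and the Manhattan distance, as preferred by the analysis, the prediction is a majority vote among the four closest historical observations.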
Abstract: The paper presents a one-dimensional transient
mathematical model of compressible thermal multi-component gas
mixture flows in pipes. The set of the mass, momentum and enthalpy
conservation equations for the gas phase is solved. Thermo-physical
properties of multi-component gas mixture are calculated by solving
the Equation of State (EOS) model. The Soave-Redlich-Kwong
(SRK-EOS) model is chosen. Gas mixture viscosity is calculated on
the basis of the Lee-Gonzales-Eakin (LGE) correlation. Numerical
analysis on rapid decompression in conventional dry gases is
performed by using the proposed mathematical model. The model is
validated on measured values of the decompression wave speed in
dry natural gas mixtures. All predictions show excellent agreement
with the experimental data at high and low pressure. The presented
model predicts the decompression in dry natural gas mixtures much
better than GASDECOM and OLGA codes, which are the most
frequently-used codes in oil and gas pipeline transport service.
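For a single component, the SRK equation of state reduces to a cubic in the compressibility factor Z, which the following sketch solves by Newton iteration from the vapour-side initial guess Z = 1. The pure-component form is shown only for illustration; the paper's model would additionally require mixing rules for multi-component mixtures:

```python
def srk_z(t, p, tc, pc, omega, r=8.314):
    """Vapour compressibility factor from the Soave-Redlich-Kwong EOS.

    SI units: t [K], p [Pa], tc [K], pc [Pa]; omega is the acentric factor.
    """
    a = 0.42748 * r ** 2 * tc ** 2 / pc
    b = 0.08664 * r * tc / pc
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - (t / tc) ** 0.5)) ** 2
    big_a = a * alpha * p / (r * t) ** 2
    big_b = b * p / (r * t)
    z = 1.0  # vapour-root initial guess
    for _ in range(50):  # Newton iterations on the SRK cubic in Z
        f = z ** 3 - z ** 2 + (big_a - big_b - big_b ** 2) * z - big_a * big_b
        df = 3 * z ** 2 - 2 * z + (big_a - big_b - big_b ** 2)
        z -= f / df
    return z
```

For methane (Tc = 190.6 K, Pc = 4.6 MPa, ω = 0.011) at 300 K and 1 bar the vapour root is close to ideal, with Z slightly below 1.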
Abstract: The present paper reports the removal of Cd(II) and
Zn(II) ions using synthetic zeolite NaA. The adsorption capacity of
the sorbent (zeolite NaA) strongly depends on whether or not Cd(II)
and Zn(II) are present simultaneously (concurrently) in the sorbate.
When Cd(II) and Zn(II) are present concurrently in the sorbate,
Zn(II) ions are sorbed at a higher rate.
Equilibrium data fitted the Langmuir, Freundlich and Temkin
isotherms well. The applicability of each isotherm equation to
describe the adsorption process was judged by the correlation
coefficient R2. The Langmuir model yielded the best fit, with R2
values equal to or higher than 0.970, as compared to the Freundlich
and Temkin models. The fact that the 1/n values range from 0.322 to
0.755 indicates that the adsorption of Cd(II) and Zn(II) ions from
aqueous solutions is also favored by the Freundlich model.
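The Freundlich fit behind the reported 1/n values can be sketched as a least-squares regression on the linearized isotherm log q = log Kf + (1/n) log C; the data in the test are synthetic, not the paper's measurements:

```python
import math

def freundlich_fit(c, q):
    """Fit log10 q = log10 Kf + (1/n) log10 C; return (Kf, 1/n, R^2)."""
    x = [math.log10(ci) for ci in c]
    y = [math.log10(qi) for qi in q]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                 # = 1/n of the Freundlich isotherm
    intercept = my - slope * mx       # = log10(Kf)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 10 ** intercept, slope, r2
```

A 1/n between 0 and 1, as reported in the abstract, indicates favorable adsorption under the Freundlich model.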
Abstract: Latvia ranks fourth in the world in broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a social media phenomenon: 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other parameters prove that consumers and companies are actively using the Internet.
However, after the authors analyzed in a number of studies how enterprises employ the e-environment, namely e-environment tools, they arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: Why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have turned to the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).
The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.
The theoretical and methodological background of the research is formed by scientific research and publications, materials from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the survey.
Abstract: This research aims to create a model for the analysis of student motivation behavior in e-Learning, based on association rule mining techniques, for the case of the Information Technology for Communication and Learning course at Suan Sunandha Rajabhat University. The model was built using association rules, one of the data mining techniques, with a minimum confidence threshold. The results showed that the student motivation behavior model built with the association rule technique can indicate the important variables that influence student motivation behavior in e-Learning.
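The association-rule computation can be sketched as follows for single-item antecedents and consequents. The minimum-support threshold and the toy learning-activity transactions are illustrative assumptions (the abstract specifies only a minimum confidence):

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Enumerate rules A -> B with one antecedent and one consequent item."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            n_ante = sum(1 for t in transactions if ante in t)
            n_both = sum(1 for t in transactions if ante in t and cons in t)
            support = n_both / n
            confidence = n_both / n_ante if n_ante else 0.0
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules
```

For example, if three of four student sessions contain both 'quiz' and 'video' activity, the rule quiz -> video reaches support 0.75 and confidence 1.0.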
Abstract: Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide a multi-million-transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD targeted at different applications. We propose an evolutionary time-series model and a statistical glitch prediction system, using a neural network with global feature selection by means of clustering methods, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques. Our work focused on the use of the clustering methods K-means and EM. A comparative study is provided for all techniques applied to the problem of circuit partitioning in VLSI design. The performance of all approaches is compared using the benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum number of interconnections between them.
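Of the clustering methods mentioned, K-means is the simplest to sketch. Here 2-D points stand in for placement coordinates of cells, an illustrative assumption rather than the paper's actual circuit representation:

```python
def kmeans(points, k=2, iters=20):
    """Minimal k-means: partition 2-D points into k clusters."""
    centroids = [tuple(p) for p in points[:k]]  # deterministic seeding
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            j = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                            + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids
```

In a partitioning context, each resulting cluster would correspond to a candidate sub-circuit whose internal connectivity is high relative to its cut.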
Abstract: One of the difficulties of vibration-based damage identification methods is the non-uniqueness of the damage identification results. Different damage locations and severities may produce identical response signals, a problem that is even more severe for the detection of multiple damage. This paper proposes a new damage detection strategy to avoid this non-uniqueness. The strategy first determines the approximate damage area based on a statistical pattern recognition method using the dynamic strain signal measured by distributed fiber Bragg gratings, and then accurately evaluates the damage information based on a Bayesian model updating method using the experimental modal data. A stochastic simulation method is used to compute the high-dimensional integral in the Bayesian problem. Finally, an experiment on a plate structure, simulating one part of a mechanical structure, is used to verify the effectiveness of this approach.
Abstract: Today's business environment requires that companies have access to highly relevant information in a matter of seconds.
Modern Business Intelligence tools rely on data structured mostly in traditional dimensional database schemas, typically represented by
star schemas. Dimensional modeling is already recognized as a
leading industry standard in the field of data warehousing although
several drawbacks and pitfalls were reported. This paper focuses on
the analysis of another data warehouse modeling technique - the
anchor modeling, and its characteristics in comparison with the
standard dimensional modeling technique from a query performance
perspective. The analysis reports the performance of queries
executed on database schemas structured according to the principles
of each modeling technique.
Abstract: Twelve lactating Etawah crossbred goats were used
in this study. The goat feed consisted of Calliandra calothyrsus,
Pennisetum purpureum, wheat bran and dried fermented cassava
peel. The cassava peels were fermented with a traditional culture
called "ragi tape" (a mixed culture of Saccharomyces cerevisiae,
Aspergillus sp., Candida, Hansenula and Acetobacter). The goats were
divided into 2 groups (Control and Treated) of six does. The
experimental diet of the Control group consisted of 70% of roughage
(fresh Calliandra calothyrsus and Pennisetum purpureum, 60:40)
and 30% of wheat bran on a dry matter (DM) basis. In the Treated
group 30% of wheat bran was replaced with dried fermented cassava
peels. Data were statistically analyzed using analysis of variance
with the SPSS program. The HCN concentration in fermented
cassava peel decreased to a non-toxic level. The nutrient composition of
dried fermented cassava peel consisted of 85.75% dry matter;
5.80% crude protein and 82.51% total digestible nutrients (TDN).
Substitution of 30% of wheat bran with dried fermented cassava peel
in the diet had no effect on dry matter and organic matter intake but
significantly (P < 0.05) decreased crude protein and TDN
consumption as well as milk yield and milk composition. The study
recommends reducing the level of substitution to less than 30% of
concentrates in the diet in order to avoid low nutrient intake and
reduced milk production in goats.
Abstract: Wireless Sensor Networks (WSNs) are used to monitor/observe vast inaccessible regions through the deployment of a large number of sensor nodes in the sensing area. For the majority of WSN applications, the collected data need to be combined with the geographic information of their origin to be useful for the user; information received from remote Sensor Nodes (SNs) that are several hops away from the base station/sink is meaningless without knowledge of its source. In addition, the location information of SNs can also be used to develop new network protocols for WSNs that improve their energy efficiency and lifetime. In this paper, range-free localization protocols for WSNs are proposed. The proposed protocols are based on the weighted centroid localization technique, where the edge weights of SNs are decided by applying fuzzy logic inference to the received signal strength and link quality between the nodes. The fuzzification is carried out using (i) Mamdani, (ii) Sugeno, and (iii) combined Mamdani-Sugeno fuzzy logic inference. Simulation results demonstrate that the proposed protocols provide better accuracy in node localization than conventional centroid-based localization protocols, despite the presence of unintentional radio frequency interference from radio frequency (RF) sources operating in the same frequency band.
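The weighted centroid estimate at the core of the proposed protocols can be sketched as follows. In the paper the weights come from Mamdani/Sugeno fuzzy inference over received signal strength and link quality, whereas in this sketch they are simply passed in:

```python
def weighted_centroid(anchors, weights):
    """Estimate a node's position as the weighted centroid of anchor coordinates.

    `anchors` is a list of (x, y) anchor positions; `weights` would normally
    be produced by the fuzzy inference stage.
    """
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / total
    return x, y
```

With equal weights this reduces to the conventional (unweighted) centroid protocol that the paper compares against.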
Abstract: Visualizing sound and noise often helps us to determine
an appropriate control over source localization. Near-field acoustic
holography (NAH) is a powerful tool for this ill-posed problem.
However, in practice, due to the small finite aperture size, the
discrete Fourier transform (FFT) based NAH cannot predict the
active region of interest (AROI) over the edges of the plane. A few
approaches have been proposed in theory for solving the finite
aperture problem. However, most of these methods are not well
suited to practical implementation, especially near the edges of the
source. In this paper, a zip-stuffing extrapolation approach with a
2D Kaiser window is suggested. It operates in the complex
wavenumber space to localize the predicted sources. We numerically
construct a test environment with touch-impact databases to test the
localization of the sound source. It is observed that the zip-stuffing
aperture extrapolation and the 2D window with evanescent
components provide higher accuracy, especially for small apertures
and their derivatives.
Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the application to the Pulping of Sugar Maple problem. Three-layer feed-forward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
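The Fletcher-Reeves update at the heart of PCGF can be sketched as plain (unpreconditioned) nonlinear conjugate gradient with a fixed step size; the preconditioner M, line search and restart logic of the PCGF/PCGP/PCGB variants are omitted, and the quadratic test gradient in the test is an illustrative assumption:

```python
def cg_fletcher_reeves(grad, x0, lr=0.1, iters=100):
    """Nonlinear conjugate gradient with the Fletcher-Reeves beta update."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]  # initial search direction: steepest descent
    for _ in range(iters):
        x = [xi + lr * di for xi, di in zip(x, d)]
        g_new = grad(x)
        denom = sum(gi * gi for gi in g)
        if denom == 0.0:
            break  # gradient vanished: converged
        beta = sum(gi * gi for gi in g_new) / denom  # Fletcher-Reeves ratio
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x
```

The Polak-Ribiere variant differs only in the beta formula, and Powell-Beale adds a restart test that resets d to the steepest-descent direction when successive gradients lose conjugacy.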
Abstract: Experimental liquid-liquid equilibria of the butan-2-ol -
ethanol - water, pentan-1-ol - ethanol - water and toluene - acetone -
water ternary systems were investigated at 25 °C. The reliability of
the experimental tie-line data was ascertained by using Othmer-Tobias
and Hand plots. The distribution coefficients (D) and separation
factors (S) of the immiscibility region were evaluated for the three
systems.
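The two reported quantities are simple ratios evaluated along each experimental tie line; a sketch with hypothetical mass fractions:

```python
def distribution_coefficient(w_org, w_aq):
    """D = mass fraction of the solute in the organic phase / aqueous phase."""
    return w_org / w_aq

def separation_factor(d_solute, d_water):
    """S = D(solute) / D(water), evaluated on the same tie line."""
    return d_solute / d_water
```

A separation factor well above 1 indicates that the solvent extracts the solute effectively from the aqueous phase.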
Abstract: In this paper, we have presented the effect of varying
time-delays on performance and stability in the single-channel multirate
sampled-data system in a hard real-time (RT-Linux) environment.
The sampling task requires response times that might exceed the
capacity of RT-Linux, so a straightforward implementation is not
feasible because of system latencies; hence, the sampling period
should be shortened to handle this task. The best sampling rate
chosen for the sampled-data system is the slowest rate that meets
all performance requirements. RT-Linux is consistent with its
specifications, and a real-time resolution of 0.01 seconds is
used to achieve an efficient result. The results of our laboratory
experiment show that the multi-rate control technique in a hard
real-time operating system (RTOS) can alleviate the stability
problems caused by random access delays and lack of
synchronization.
Abstract: Data Mining aims at discovering knowledge out of
data and presenting it in a form that is easily comprehensible to
humans. One of the useful applications in Egypt is the Cancer
management, especially the management of Acute Lymphoblastic
Leukemia or ALL, which is the most common type of cancer in
children.
This paper discusses the process of designing a prototype that can
help in the management of childhood ALL, which has a great
significance in the health care field. Besides, it has a social impact
on decreasing the incidence of the disease in children in Egypt. It also
provides valuable information about the distribution and
segmentation of ALL in Egypt, which may be linked to the possible
risk factors.
Undirected Knowledge Discovery is used since, in the case of this
research project, there is no target field as the data provided is
mainly subjective. This is done in order to quantify the subjective
variables. Therefore, the computer will be asked to identify
significant patterns in the provided medical data about ALL. This
may be achieved through collecting the data necessary for the
system, determining the data mining technique to be used for the
system, and choosing the most suitable implementation tool for the
domain.
The research makes use of a data mining tool, Clementine, so as to
apply Decision Trees technique. We feed it with data extracted from
real-life cases taken from specialized Cancer Institutes. Relevant
medical cases details such as patient medical history and diagnosis
are analyzed, classified, and clustered in order to improve the disease
management.
Abstract: With the advent of digital cinema and digital
broadcasting, copyright protection of video data has been one of the
most important issues.
We present a novel method of watermarking for video image data
based on hardware and discrete wavelet transform techniques, and
name it "traceable watermarking" because the watermarked data is
constructed before the transmission process and traced after it has
been received by an authorized user.
In our method, we embed the watermark into the lowest part of each
image frame in decoded video by using a hardware LSI.
Digital cinema is an important application for traceable
watermarking, since digital cinema systems make use of watermarking
technology during content encoding, encryption, transmission,
decoding and all the intermediate processes performed in digital
cinema systems. The watermark is embedded into randomly selected
movie frames using hash functions.
Embedded watermark information can be extracted from the
decoded video data without any need to access the original movie
data. Our experimental results show that the proposed traceable
watermarking method for digital cinema systems is much better than
conventional watermarking techniques in terms of robustness, image
quality, speed and simplicity.