Abstract: Plasma Wind Tunnels (PWT) are extensively used for screening and qualification of re-entry Thermal Protection System (TPS) materials. Proper design of a supersonic diffuser for a plasma wind tunnel is important for achieving good pressure recovery (thereby reducing vacuum pumping requirements and run-time costs) and for isolating downstream stream fluctuations from propagating upstream. This paper presents the details of a rapid design methodology successfully employed for designing supersonic diffusers for high power (several megawatts) plasma wind tunnels, along with a numerical performance analysis of a diffuser configuration designed for a one megawatt power rated plasma wind tunnel (enthalpy ~ 30 MJ/kg) using the FLUENT 6.3® solver for different diffuser operating sub-atmospheric back-pressures.
Abstract: SQL injection on web applications is a very popular
kind of attack. There are mechanisms such as intrusion detection
systems in order to detect this attack. These strategies often rely on
techniques implemented at high layers of the application but do not
consider the low level of system calls. The problem of only
considering the high level perspective is that an attacker can
circumvent the detection tools using certain techniques such as URL
encoding. One technique currently used for detecting low-level
attacks on privileged processes is the tracing of system calls. System
calls act as a single gate to the Operating System (OS) kernel; they
allow catching the critical data at an appropriate level of detail. Our
basic assumption is that any type of application, be it a system
service, utility program or Web application, “speaks” the language of
system calls when having a conversation with the OS kernel. At this
level we can see the actual attack while it is happening. We conduct
an experiment in order to demonstrate the suitability of system call
analysis for detecting SQL injection. We are able to detect the attack.
Therefore we conclude that system calls are not only powerful in
detecting low-level attacks but that they also enable us to detect
high-level attacks such as SQL injection.
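The core idea, matching attack signatures in the data that actually crosses the system-call boundary rather than in the (possibly URL-encoded) HTTP layer, can be sketched in a few lines. The traced write() strings, the signature list and the helper names below are hypothetical illustrations, not the authors' detector; a real deployment would capture buffers with a tracing facility such as strace or ptrace.

```python
# Minimal sketch: scan payloads observed at the system-call boundary
# (e.g. buffers of read()/write()/send() calls captured by a tracer)
# for SQL-injection signatures. Trace lines below are hypothetical.
import re

# Signatures that survive URL decoding once we inspect the actual syscall data
SQLI_PATTERNS = [
    re.compile(r"'\s*or\s+'?\d+'?\s*=\s*'?\d+", re.IGNORECASE),  # ' OR 1=1
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r";\s*drop\s+table", re.IGNORECASE),
]

def is_suspicious(buffer_text):
    """Flag a traced buffer if any SQLi signature matches."""
    return any(p.search(buffer_text) for p in SQLI_PATTERNS)

# Hypothetical decoded buffers as they might appear in write() syscalls
trace = [
    'write(5, "SELECT name FROM users WHERE id = 7", 36)',
    "write(5, \"SELECT name FROM users WHERE id = '' OR '1'='1'\", 48)",
]
flags = [is_suspicious(line) for line in trace]
```

The benign query passes while the injected tautology is flagged, illustrating why the attack is visible at this level even when it is obfuscated at the HTTP layer.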
Abstract: The intermittent aeration process can be easily applied to
an existing activated sludge system and is highly reliable against loading changes. It can also be operated in a relatively simple way.
Since the moving-bed biofilm reactor method processes pollutants by attaching and securing the microorganisms on the media, the process
efficiency can be higher compared to the suspended growth biological
treatment process, and can reduce the return of sludge. In this study,
the existing intermittent aeration process with alternating flow, as
applied to the oxidation ditch, is applied to a continuous flow stirred tank reactor, combining the advantages of both processes. We aim
to develop this process to significantly reduce sludge return in the clarifier and to secure reliable treated-water quality by
adding moving media. The corresponding process is an appropriate
form of infrastructure for the u-environment in a future u-City and
is expected to accelerate the implementation of the u-Eco city in conjunction with city-based services. The laboratory-scale
system was operated at an HRT of 8 hours, excluding the final
clarifier, and showed removal efficiencies of 97.7 %, 73.1 % and 9.4
% for organic matter, TN and TP, respectively, over a
4-hour cycle at a system SRT of 10 days. After adding the media, the removal efficiency of phosphorus remained at a similar level to
that before the addition, but the removal efficiency of nitrogen was
improved by 7~10 %. In addition, the solids, maintained at an MLSS of
1200~1400 with 25 % media packing, all attached onto the
media, so that no sludge entered the clarifier. Therefore,
sludge return is no longer needed.
Abstract: In order to provide flexibility as well as survivability
over a passive optical network (PON), a new automatic
random fault-recovery mechanism with an array-waveguide-grating
based (AWG-based) optical switch (OSW) is presented. First, a
wavelength-division-multiplexing and optical code-division
multiple-access (WDM/OCDMA) scheme is configured to meet the
requirements of the various geographical locations between the optical
network unit (ONU) and the optical line terminal (OLT). The AWG-based optical
switch is designed as a central star-mesh topology to
avoid or reduce duplicated redundant elements such as fibers and
transceivers. Hence, with a simple monitoring and routing switch
algorithm, random fault-recovery capability is achieved over the
bi-directional (up/downstream) WDM/OCDMA scheme. When a
distribution fiber (DF) fails or the bit-error-rate (BER) is higher
than the 10⁻⁹ requirement, the primary/slave AWG-based OSWs are
adjusted and controlled dynamically to restore the affected ONU
groups via the other working DFs immediately.
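The monitoring-and-rerouting idea can be sketched as a small control loop: any ONU group whose distribution fiber is cut or exceeds the BER requirement is moved to a working DF. The data structures, names and BER values below are hypothetical illustrations; a real controller would drive the AWG-based OSW hardware rather than rewrite a Python dictionary.

```python
# Minimal sketch of the fault-recovery control logic. A DF with BER None
# is treated as cut; a DF above the 1e-9 requirement is treated as degraded.
BER_REQUIREMENT = 1e-9

def reroute(df_status, routing):
    """df_status: DF name -> measured BER (None if the fiber is cut).
    routing: ONU group -> DF name. Returns an updated routing table."""
    working = [df for df, ber in df_status.items()
               if ber is not None and ber <= BER_REQUIREMENT]
    new_routing = {}
    for group, df in routing.items():
        ber = df_status[df]
        if ber is None or ber > BER_REQUIREMENT:
            # Fault detected: switch this group to a remaining working DF
            new_routing[group] = working[0] if working else df
        else:
            new_routing[group] = df
    return new_routing

# Hypothetical scenario: DF2 cut, DF3 degraded, DF1 healthy
status = {"DF1": 1e-12, "DF2": None, "DF3": 5e-8}
routing = {"ONU-A": "DF1", "ONU-B": "DF2", "ONU-C": "DF3"}
restored = reroute(status, routing)
```

The affected ONU groups end up on the healthy DF, which is the restoration behaviour the abstract describes at the system level.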
Abstract: In this paper, a linear multistep technique using power
series as the basis function is used to develop block methods
suitable for generating the direct solution of special second
order ordinary differential equations of the form y′′ = f(x,y), a ≤ x ≤ b, with associated initial or boundary conditions. The continuous hybrid formulations enable us to differentiate and evaluate at some
grid and off-grid points to obtain three discrete
schemes, each of order (4,4,4)T, which were used in block form for
parallel or sequential solutions of the problems. The computational
burden and computer time wastage involved in the usual reduction of a
second order problem into a system of first order equations are avoided
by this approach. Furthermore, a stability analysis and efficiency of
the block method are tested on linear and non-linear ordinary
differential equations whose solutions are oscillatory or nearly
periodic in nature, and the results obtained compared favourably with
the exact solution.
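The direct treatment of y′′ = f(x, y) can be illustrated, far more simply than the hybrid block schemes above, by the classical Störmer central-difference update, which likewise avoids reduction to a first-order system. The step size, start-up procedure and oscillatory test problem below are our own assumptions, not taken from the paper.

```python
# Minimal sketch: directly integrating y'' = f(x, y) without reducing it to
# a first-order system, via the classical Stormer (central-difference) update
#   y_{n+1} = 2*y_n - y_{n-1} + h^2 * f(x_n, y_n).
# This is NOT the authors' hybrid block method; it only illustrates the idea
# of a direct second-order integrator on an oscillatory test problem.
import math

def stormer(f, x0, y0, dy0, h, n_steps):
    """Integrate y'' = f(x, y) from (x0, y0) with y'(x0) = dy0."""
    xs = [x0, x0 + h]
    # Taylor start-up for the second point: y1 = y0 + h*y'0 + (h^2/2)*f(x0, y0)
    ys = [y0, y0 + h * dy0 + 0.5 * h * h * f(x0, y0)]
    for n in range(1, n_steps):
        x_n = x0 + n * h
        ys.append(2.0 * ys[-1] - ys[-2] + h * h * f(x_n, ys[-1]))
        xs.append(x_n + h)
    return xs, ys

# Oscillatory test problem: y'' = -y, y(0)=0, y'(0)=1, exact solution sin(x)
xs, ys = stormer(lambda x, y: -y, 0.0, 0.0, 1.0, h=0.01, n_steps=200)
err = max(abs(y - math.sin(x)) for x, y in zip(xs, ys))
```

On this test case the computed solution tracks sin(x) to well within 10⁻³ over the interval, the kind of behaviour the oscillatory test problems in the abstract are meant to probe.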
Abstract: In the present study, a steady-state simulation model
has been developed to evaluate the system performance of a
transcritical carbon dioxide heat pump system for simultaneous water
cooling and heating. Both the evaporator (including both two-phase
and superheated zone) and gas cooler models consider the highly
variable heat transfer characteristics of CO2 and pressure drop. The
numerical simulation model of transcritical CO2 heat pump has been
validated by test data obtained from experiments on the heat pump
prototype. Comparison between the test results and the model
prediction for system COP variation with compressor discharge
pressure shows a modest agreement with a maximum deviation of
15% and the trends are fairly similar. Comparison for other operating
parameters also shows fairly similar deviation between the test
results and the model prediction. Finally, simulation results are
presented to study the effects of operating parameters, such as the
heat exchanger fluid inlet temperature, discharge pressure and
compressor speed, on the system performance of a CO2 heat pump suitable
for a dairy plant where simultaneous cooling at 4 °C and heating at
73 °C are required. Results show that the good heat transfer properties
of CO2 in both the two-phase and supercritical regions and the efficient
compression process contribute significantly to the high system COPs.
Abstract: Research into the problem of classification of sonar signals has been taken up as a challenging task for neural networks. This paper investigates the design of an optimal classifier using a Multilayer Perceptron Neural Network (MLP NN) and Support Vector Machines (SVM). Results obtained using sonar data sets suggest that the SVM classifier performs well in comparison with the well-known MLP NN classifier. An average classification accuracy of 91.974% is achieved with the SVM classifier and 90.3609% with the MLP NN classifier on the test instances. The area under the Receiver Operating Characteristics (ROC) curve for the proposed SVM classifier on the test data set is found to be 0.981183, which is very close to unity and clearly confirms the excellent quality of the proposed classifier. The SVM classifier employed in this paper is implemented using the kernel Adatron algorithm and is seen to be robust and relatively insensitive to parameter initialization in comparison to the MLP NN.
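The kernel Adatron algorithm mentioned above can be sketched compactly. The toy 2-D data, RBF kernel parameter and learning rate below are our own assumptions standing in for the sonar dataset; the sketch only shows the learning rule, not the paper's full classifier.

```python
# Minimal sketch of the kernel Adatron learning rule (non-negative
# multipliers alpha grown while the functional margin is below 1).
import math

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel between two points."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def kernel_adatron(X, y, eta=0.5, epochs=200):
    """Learn alphas for the decision function f(x) = sum_j alpha_j*y_j*K(x_j, x)."""
    n = len(X)
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            z = sum(alpha[j] * y[j] * rbf(X[j], X[i]) for j in range(n))
            # Adatron update: increase alpha_i while y_i*z < 1, clip at zero
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - y[i] * z))
    return alpha

def predict(X, y, alpha, x):
    s = sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))
    return 1 if s >= 0 else -1

# Toy separable data (hypothetical; the paper uses the sonar dataset)
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.2, 0.9)]
y = [-1, -1, 1, 1]
alpha = kernel_adatron(X, y)
preds = [predict(X, y, alpha, x) for x in X]
```

After convergence all training points sit at or outside the unit margin, so the training predictions match the labels; the clipping at zero is what keeps the multipliers feasible, which underlies the robustness to initialization the abstract reports.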
Abstract: The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered directly by resorting to this repository of information. What the reverse engineering methodology aims at is the extraction of abstract, goal-oriented "views" of the system, able to summarize relevant properties of the computation performed by the program. Concentrating on reverse engineering, we model C++ files by designing a translator.
Abstract: Information and communication technology (ICT) has
become, within a very short time, one of the basic building blocks of
modern society. Many countries now understand the importance of ICT,
and mastering its basic skills and concepts is treated as part of the
core of education. Organizations, experts and practitioners in the
education sector increasingly recognize the importance of ICT in
supporting educational improvement and reform. This paper
addresses the convergence of ICT and education. When two
technologies converge, together they generate
great opportunities and challenges. This paper focuses on these
issues. The introduction explains ICT, education, and
ICT-enhanced education. The next section describes the need for ICT in
education, the relationship between ICT skills and education, and the
stages of the teaching-learning process. The following two sections
describe the opportunities and challenges in integrating ICT in
education. Finally, the concluding section summarizes the idea and its
usefulness.
Abstract: Understanding proteins functions is a major goal in
the post-genomic era. Proteins usually work in the context of other
proteins and rarely function alone. Therefore, it is highly relevant to
study the interaction partners of a protein in order to understand its
function. Machine learning techniques have been widely applied to
predict protein-protein interactions. Kernel functions play an
important role for a successful machine learning technique. Choosing
the appropriate kernel function can lead to a better accuracy in a
binary classifier such as the support vector machines. In this paper,
we describe a Bayesian kernel for the support vector machine to
predict protein-protein interactions. The use of the Bayesian kernel can
improve the classifier performance by incorporating the probability
characteristic of the available experimental protein-protein
interactions data that were compiled from different sources. In
addition, the probabilistic output from the Bayesian kernel can assist
biologists to conduct more research on the highly predicted
interactions. The results show that the accuracy of the classifier is
improved using the Bayesian kernel compared to the standard SVM
kernels, implying that protein-protein interactions can be predicted
with better accuracy using this approach.
Abstract: An enhanced particle swarm optimization (PSO) algorithm
is presented in this work to solve the non-convex optimal power flow
(OPF) problem, which has both discrete and continuous optimization variables.
The objective functions considered are the conventional quadratic
function and the augmented quadratic function. The latter model
presents non-differentiable and non-convex regions that challenge
most gradient-based optimization algorithms. The variables to be
optimized are the generator real power outputs and
voltage magnitudes, discrete transformer tap settings, and discrete
reactive power injections due to capacitor banks. The set of equality
constraints taken into account are the power flow equations while the
inequality ones are the limits of the real and reactive power of the
generators, voltage magnitude at each bus, transformer tap settings,
and capacitor bank reactive power injections. The proposed
algorithm combines PSO with the Newton-Raphson algorithm to
minimize the fuel cost function. The IEEE 30-bus system with six
generating units is used to test the proposed algorithm. Several cases
were investigated to test and validate the consistency of detecting
optimal or near-optimal solutions for each objective. Results are
compared to solutions obtained using sequential quadratic
programming and Genetic Algorithms.
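A minimal continuous PSO, stripped of the paper's OPF machinery (power-flow constraints, discrete tap and capacitor variables, Newton-Raphson coupling), can be sketched as follows. The toy quadratic objective and all parameter values are illustrative assumptions.

```python
# Minimal continuous PSO sketch minimizing a toy quadratic, a stand-in
# for the fuel-cost objective. Velocities follow the standard inertia +
# cognitive + social update; positions are clipped to their bounds.
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    rnd = random.Random(0)  # fixed seed for reproducibility
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical smooth cost with minimum at (1, -2)
best, best_val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                     bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting each particle position would encode generator outputs, tap settings and capacitor injections, with a Newton-Raphson power flow evaluated inside the objective; the swarm dynamics themselves are unchanged.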
Abstract: Breastfeeding is an important concept in the maternal life of a woman. In this paper, we focus on exclusive breastfeeding, the feeding of a baby on no milk other than breast milk. This type of breastfeeding is very important during the first six months because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in Mauritius, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we give an overview of exclusive breastfeeding in Mauritius and the factors influencing it. We further analyze the local practices of exclusive breastfeeding using the Generalized Poisson regression model and the negative binomial model, since the data are over-dispersed.
Abstract: This paper discusses applications of a revolutionary
information technology, Geographic Information Systems (GIS), in
the field of the history of cartography by examples, including
assessing accuracy of early maps, establishing a database of places
and historical administrative units in history, integrating early maps
in GIS or digital images, and analyzing social, political, and
economic information related to the production of early maps. GIS
provides a new means to evaluate the accuracy of early maps. Four
basic steps using GIS for this type of study are discussed. In addition,
several historical geographical information systems are introduced.
These include China Historical Geographic Information Systems
(CHGIS), the United States National Historical Geographic
Information System (NHGIS), and the Great Britain Historical
Geographical Information System. GIS also provides digital means to
display and analyze the spatial information on the early maps or to
layer them with modern spatial data. How the GIS relational data
structure may be used to analyze social, political, and economic
information related to the production of early maps is also discussed.
Through the discussion of these examples, this paper reveals the
value of GIS applications in this field.
Abstract: The complete removal of carbon monoxide from exhaust
gases still poses a challenge to researchers, and solutions remain
under development. Modeling of carbon monoxide reduction is carried
out using a heterogeneous reaction with low-cost, non-noble-metal
based catalysts for the purpose of controlling emissions released to
the atmosphere. A
simple one-dimensional model was developed for the monolith using
hopcalite catalyst. The converter is assumed to be an adiabatic
monolith operating under warm-up conditions. The effect of inlet gas
temperatures and catalyst loading on carbon monoxide reduction
during the cold-start period in the converter is analysed.
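A one-dimensional monolith model of this kind can be sketched as a steady plug-flow balance marched along the channel. All numbers below (pre-exponential factor, activation energy, velocity, adiabatic rise) are hypothetical illustration values, not the paper's hopcalite kinetics or converter geometry.

```python
# Minimal 1-D sketch: steady plug-flow CO conversion along an adiabatic
# monolith channel with a first-order Arrhenius rate, integrated by Euler
# stepping of dX/dz = k(T)/u * (1 - X) with an adiabatic energy balance.
import math

def co_conversion(T_in, k0=5.0e6, Ea=60e3, u=2.0, L=0.1, n=1000):
    """Return CO conversion X at the monolith outlet for inlet gas temp T_in [K]."""
    R = 8.314        # J/mol/K
    dT_ad = 150.0    # hypothetical adiabatic temperature rise at full conversion, K
    dz = L / n
    X, T = 0.0, T_in
    for _ in range(n):
        k = k0 * math.exp(-Ea / (R * T))   # first-order rate constant, 1/s
        X += dz * k / u * (1.0 - X)
        X = min(X, 1.0)
        T = T_in + dT_ad * X               # adiabatic energy balance
    return X

# Higher inlet gas temperature gives higher conversion during warm-up
X_cold, X_hot = co_conversion(400.0), co_conversion(500.0)
```

The sketch reproduces the qualitative cold-start behaviour the abstract studies: conversion is strongly limited at low inlet temperature and rises sharply as the gas (and hence the catalyst) warms.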
Abstract: Due to the recovering global economy, enterprises are
increasingly focusing on logistics. Investing in logistic measures in
production generates a large potential for achieving a good starting
point within a competitive field. Unlike during the global economic
crisis, enterprises are now challenged with investing available capital
to maximize profits. In order to be able to create an informed and
quantifiably comprehensible basis for a decision, enterprises need an
adequate model for logistically and monetarily evaluating measures
in production. The Collaborative Research Centre 489 (SFB 489) at the
Institute for Production Systems (IFA) developed a Logistic
Information System which provides support in making decisions and
is designed specifically for the forging industry. The aim of a project
that has been applied for is now to transfer this approach in order to
develop a universal method for logistically and monetarily evaluating
measures in production.
Abstract: Industrial radiography is a well-established technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and operating time, it still suffers from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, reducing reliance on human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and so that the detected defect region represents the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.
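As a toy illustration of the segmentation step, a global threshold followed by connected-component labeling can be sketched as follows. The synthetic image and threshold are hypothetical; real radiographic pipelines combine global and local preprocessing as the abstract notes.

```python
# Minimal sketch: global thresholding of bright defect-like spots in a
# grayscale image, followed by 4-connected component labeling (DFS).
def segment_defects(img, threshold):
    """Return a list of connected components (sets of (row, col)) above threshold."""
    rows, cols = len(img), len(img[0])
    mask = {(r, c) for r in range(rows) for c in range(cols)
            if img[r][c] > threshold}
    comps, seen = [], set()
    for seed in mask:
        if seed in seen:
            continue
        comp, stack = set(), [seed]
        while stack:
            r, c = stack.pop()
            if (r, c) in seen or (r, c) not in mask:
                continue
            seen.add((r, c))
            comp.add((r, c))
            stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
        comps.append(comp)
    return comps

# Toy "radiograph": uniform background with two bright defect regions
img = [[10] * 8 for _ in range(6)]
for r, c in [(1, 1), (1, 2), (2, 1)]:
    img[r][c] = 200   # defect 1
for r, c in [(4, 5), (4, 6)]:
    img[r][c] = 180   # defect 2
defects = segment_defects(img, threshold=100)
```

Each returned component is a candidate defect region whose size and shape could then be measured; on noisy real radiographs the fixed global threshold would be replaced or supplemented by local methods.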
Abstract: As networking has become popular, Web-based learning
has become a trend in tool design. Moreover, five-axis
machining has been widely used in industry recently; however, it
carries potential axis-table collision problems. Thus, this paper aims
at proposing an efficient web-based collision detection learning tool
for five-axis machining. However, collision detection consumes heavy
resources that few devices can support, so this research uses a
systematic approach based on web knowledge to detect collisions. The
methodologies include the kinematics analyses for five-axis motions,
separating axis method for collision detection, and computer
simulation for verification. The machine structure is modeled in STL
format in CAD software. The input to the detection system is the
g-code part program, which describes the tool motions to produce the
part surface. This research produced a simulation program with C
programming language and demonstrated a five-axis machining
example with collision detection on web site. The system simulates the
five-axis CNC motion for tool trajectory and detects for any collisions
according to the input g-codes and also supports high-performance
web service benefiting from C. The result shows that our method
improves 4.5 time of computational efficiency, comparing to the
conventional detection method.
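The separating axis method named above can be illustrated in 2-D: two convex polygons are disjoint exactly when their projections onto some edge normal do not overlap. This sketch is in Python rather than the paper's C, and the polygons are hypothetical stand-ins for machine components.

```python
# Minimal 2-D separating axis test for convex polygons given as ordered
# vertex lists. Each edge normal is a candidate separating axis; if any
# axis separates the two projection intervals, there is no collision.
def project(poly, axis):
    """Project polygon vertices onto an axis; return (min, max) interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def collides(a, b):
    """Separating axis test: True iff convex polygons a and b overlap."""
    for poly in (a, b):
        n = len(poly)
        for i in range(n):
            # Edge vector and its perpendicular (candidate separating axis)
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)
            amin, amax = project(a, axis)
            bmin, bmax = project(b, axis)
            if amax < bmin or bmax < amin:
                return False  # found a separating axis: no collision
    return True

sq1 = [(0, 0), (2, 0), (2, 2), (0, 2)]
sq2 = [(1, 1), (3, 1), (3, 3), (1, 3)]   # overlaps sq1
sq3 = [(5, 5), (6, 5), (6, 6), (5, 6)]   # disjoint from sq1
```

In the five-axis setting the same test is run in 3-D (face normals plus edge cross-products) on the STL-modeled machine components at each simulated tool position.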
Abstract: This paper presents the orbit design, orbit propagator and
geomagnetic field estimator for nanosatellites, specifically for the
upcoming CubeSat ICUBE-1 of the Institute of Space Technology (IST),
Islamabad, Pakistan. The ICUBE mission is designed for a low earth
orbit at an approximate height of 700 km. The presented work designs
the Keplerian elements for the ICUBE-1 orbit while incorporating the
mission requirements and propagates the orbit using J2 perturbations.
The attitude determination system of ICUBE-1 consists of
attitude determination sensors like magnetometer and sun sensor. The
Geomagnetic field estimator is developed according to the model of
International Geomagnetic Reference Field (IGRF) for comparing the
magnetic field measurements by the magnetometer for attitude
determination. The outputs of the propagator, namely the position and
velocity vectors and the magnetic field vectors, are compared and
verified against the same scenario generated in the Satellite Tool
Kit (STK).
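One standard building block of such a Keplerian propagator is solving Kepler's equation M = E − e·sin E for the eccentric anomaly. The sketch below shows a Newton iteration for this step only; the eccentricity and mean anomaly values are illustrative, and the full propagator in the abstract (element design, J2 perturbations, IGRF comparison) involves much more.

```python
# Minimal sketch: solve Kepler's equation M = E - e*sin(E) for the
# eccentric anomaly E by Newton's method.
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Return E (radians) satisfying M = E - e*sin(E)."""
    E = M if e < 0.8 else math.pi   # standard initial guess
    for _ in range(max_iter):
        # Newton step on g(E) = E - e*sin(E) - M, g'(E) = 1 - e*cos(E)
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Near-circular LEO-like orbit (small eccentricity assumed, e.g. 0.001)
E = eccentric_anomaly(M=1.0, e=0.001)
residual = E - 0.001 * math.sin(E) - 1.0
```

For a near-circular orbit E stays close to M, and the residual of Kepler's equation converges to machine precision in a few iterations; E then feeds the true anomaly and the position/velocity vectors that are compared against STK.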
Abstract: Segmentation techniques based on Active Contour
Models have benefited strongly from the use of prior information
during their evolution. Shape prior information is captured from
a training set and is introduced in the optimization procedure to
restrict the evolution into allowable shapes. In this way, the evolution
converges onto regions even with weak boundaries. Although
significant effort has been devoted to different ways of capturing
and analyzing prior information, very little thought has been given
to the way of combining image information with prior information.
This paper focuses on a more natural way of incorporating
prior information in the level set framework. For proof of concept
the method is applied on hippocampus segmentation in T1-MR
images. Hippocampus segmentation is a very challenging task, due
to the multivariate surrounding region and the missing boundary
with the neighboring amygdala, whose intensities are identical. The
proposed method mimics the human way of segmenting and thus shows
improved segmentation accuracy.
Abstract: In this paper the reference current for Voltage Source
Converter (VSC) of the Shunt Active Power Filter (SAPF) is
generated using the Synchronous Reference Frame method,
incorporating a PI controller with an anti-windup scheme. The
proposed method improves the harmonic filtering by compensating
the winding up phenomenon caused by the integral term of the PI
controller.
Using the Reference Frame Transformation, the current is transformed
from the a-b-c stationary frame to the rotating 0-d-q frame. Using
the PI controller, the current in the 0-d-q frame is controlled to
get the desired reference signal. A controller with integral action
combined with an actuator that becomes saturated can give some
undesirable effects. If the control error is so large that the integrator
saturates the actuator, the feedback path becomes ineffective because
the actuator will remain saturated even if the process output changes.
The integrator, being an unstable system, may then integrate up to a
very large value, a phenomenon known as integrator windup.
Implementing an integrator anti-windup circuit turns off the
integral action when the actuator saturates, hence improving the
performance of the SAPF and dynamically compensating harmonics
in the power network. In this paper, the system performance is
examined with a Shunt Active Power Filter simulation model.
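The anti-windup behaviour described above can be sketched on a toy first-order plant with a clamping scheme: the integral state is simply frozen while the actuator output is saturated. The plant, gains and saturation limit are hypothetical; in the paper this logic sits inside the SAPF's d-q current control loop.

```python
# Minimal sketch: PI control of a first-order plant dy/dt = -y + u with
# actuator saturation at +/- u_max, with and without clamping anti-windup.
def simulate(anti_windup, kp=2.0, ki=10.0, u_max=1.2,
             setpoint=1.0, dt=1e-3, steps=8000):
    """Return (peak output, final output) of the closed loop."""
    y, integ, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        u_unsat = kp * e + ki * integ
        u = max(-u_max, min(u_max, u_unsat))  # actuator saturation
        # Clamping anti-windup: freeze the integral while saturated
        if (not anti_windup) or u == u_unsat:
            integ += e * dt
        y += dt * (-y + u)
        peak = max(peak, y)
    return peak, y

peak_aw, y_aw = simulate(anti_windup=True)
peak_plain, y_plain = simulate(anti_windup=False)
```

Without clamping, the integral winds up during the initial saturation and drives a larger overshoot before it unwinds; with clamping the loop leaves saturation promptly and settles to the setpoint, which is the improvement in dynamic compensation the abstract claims for the SAPF.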