Abstract: In geophysical exploration surveys, the quality of acquired data is of significant importance before the data processing and interpretation phases are executed. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach’s alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, minimally affected by tectonics, and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed a high percentage of residuals between the observed and predicted cases. The outcomes of hypothesis testing likewise indicated the bias and erratic nature of the acquired database, and the low estimated value of alpha (α) in Cronbach’s alpha test confirmed its poor reliability. Such low-quality data require extensive static corrections or, in some cases, reacquisition, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models that support more informed decisions in hydrocarbon exploration.
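As a minimal illustration of the statistics named above (a sketch on synthetic numbers, not the authors' actual computation on the survey data), NRMSE and Cronbach's alpha can be computed as follows:

```python
import numpy as np

def nrmse(observed, predicted):
    """Root mean square error normalized by the observed range."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_samples, n_items) score matrix."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy example with hypothetical measurement values (not the survey data).
rng = np.random.default_rng(0)
obs = rng.normal(100.0, 10.0, size=50)
pred = obs + rng.normal(0.0, 5.0, size=50)
print(f"NRMSE = {nrmse(obs, pred):.3f}")
print(f"alpha = {cronbach_alpha(rng.normal(0, 1, (50, 4)) + obs[:, None]):.3f}")
```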
Abstract: Three-phase power systems suffer from several challenging problems, e.g. voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator, and the static reactive power compensator. In this work, an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system and to substantially improve the response time of the reactive compensator for voltage balancing. The neural network determines the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are used in a laboratory model, and several test cases were conducted to test and validate the capabilities of the proposed technique. The results show the high performance of the proposed Neural Network Control (NNC) technique in correcting voltage unbalance conditions at a three-phase load in terms of accuracy and response time.
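A minimal sketch of the idea, assuming a small feedforward network with untrained, randomly initialized weights and a hypothetical mapping from per-unit phase voltages to thyristor-controlled reactor (TCR) firing angles in the standard 90°-180° operating range (the paper's actual topology and training are not specified here):

```python
import numpy as np

# Hypothetical dimensions: 3 measured phase voltages in, 3 TCR firing angles out.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)   # hidden layer, 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # output layer

def firing_angles(v_abc):
    """Map per-unit phase voltages to TCR firing angles (degrees)."""
    h = np.tanh(W1 @ v_abc + b1)
    # Squash outputs into the TCR operating range [90, 180] degrees.
    return 90.0 + 90.0 * (1.0 / (1.0 + np.exp(-(W2 @ h + b2))))

print(firing_angles(np.array([1.02, 0.95, 0.98])))  # unbalanced per-unit voltages
```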
Abstract: Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles to represent the urban topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation along the Reyran river valley following the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with a notable speedup due to parallelization.
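For reference, the 2D depth-averaged shallow water equations in a common conservative form (the paper's exact well-balanced formulation may differ):

\[
\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}), \qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\quad
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2}gh^2 \\ huv \end{pmatrix},\quad
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2}gh^2 \end{pmatrix},
\]

where h is the water depth, (u, v) the depth-averaged velocities, g the gravitational acceleration, and S(U) collects bed-slope and friction source terms.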
Abstract: One of the areas that presents an opportunity to reduce national carbon emissions is the energy management of public buildings. To our knowledge, there is no easy-to-use, centralized mechanism that enables the government to monitor the overall energy performance, as well as the carbon footprint, of Malaysia’s public buildings. Therefore, the Public Works Department Malaysia (PWD) has developed a web-based energy performance reporting tool called JENOSYS (JKR Energy Online System), which incorporates a database of utility account numbers acquired from the utility service provider for analysis and reporting. As a test case, 23 buildings under PWD were selected and monitored for their monthly energy performance (in kWh), carbon emission reduction (in tCO₂eq), and utility cost (in MYR) against the baseline. This paper demonstrates the simplicity with which buildings without energy metering can be monitored centrally, as well as the benefits the government can accrue in terms of building energy disclosure, and concludes by recommending expansion of the system to all public buildings in Malaysia.
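A minimal sketch of the kind of monthly reporting arithmetic involved; the grid emission factor and tariff below are illustrative placeholders, not the values used by JENOSYS:

```python
# Hypothetical reporting helper; the emission factor below is illustrative.
GRID_FACTOR_TCO2_PER_KWH = 0.000585  # example grid emission factor

def monthly_report(kwh_actual, kwh_baseline, tariff_myr_per_kwh):
    """Summarize one building-month against its baseline."""
    saving_kwh = kwh_baseline - kwh_actual
    return {
        "energy_kWh": kwh_actual,
        "saving_kWh": saving_kwh,
        "emission_reduction_tCO2eq": saving_kwh * GRID_FACTOR_TCO2_PER_KWH,
        "cost_MYR": kwh_actual * tariff_myr_per_kwh,
    }

print(monthly_report(42_000, 48_000, 0.365))  # hypothetical tariff in MYR/kWh
```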
Abstract: Multi-modal film boiling simulations are carried out on adaptive octree grids. The liquid-vapor interface is captured using the volume-of-fluid framework, adjusted to account for exchanges of mass, momentum, and energy across the interface. Surface tension effects are included through a volumetric source term in the momentum equations. The phase change calculations are based on the exact location and orientation of the interface; however, the source terms are calculated using the mixture variables to remain consistent with the one-field formulation used to represent the entire fluid domain. The numerical model on the octree representation of the computational grid is first verified using test cases including advection tests in severely deforming velocity fields, gravity-driven instabilities, and bubble growth in uniformly superheated liquid under zero gravity. The model is then used to simulate both single-mode and multi-modal film boiling. The octree grid is dynamically adapted to maintain the highest grid resolution on the instability fronts using markers of interface location, volume fraction, and thermal gradients. The method thus provides an efficient platform for simulating fluid instabilities, with or without phase change, in the presence of body forces such as gravity or shear layer instabilities.
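For reference, one common form of the ingredients mentioned above, with α the vapor volume fraction advected with a phase-change source and surface tension added as a continuum-surface-force volumetric term (the paper's exact formulation may differ):

\[
\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha\,\mathbf{u}) = \frac{\dot m'''}{\rho_v}, \qquad
\mathbf{F}_\sigma = \sigma\,\kappa\,\nabla\alpha,
\]

where ṁ''' is the volumetric mass transfer rate, ρ_v the vapor density, σ the surface tension coefficient, and κ the interface curvature.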
Abstract: The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal at the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: for the test data, the detection time at a 10 Hz sampling rate was around 2 seconds, which reduced to 0.3 seconds at 100 Hz.
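The abstract does not name the detection algorithm; as a purely illustrative stand-in for a P-wave onset detector, a classical short-term-average/long-term-average (STA/LTA) trigger on a synthetic trace might look like this:

```python
import numpy as np

def sta_lta_onset(signal, fs, sta_win=0.5, lta_win=5.0, threshold=4.0):
    """Return the first sample index where STA/LTA exceeds the threshold,
    or None. A generic trigger, not the paper's detector."""
    energy = signal.astype(float) ** 2
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    for i in range(n_lta, len(energy)):
        sta = energy[i - n_sta:i].mean()
        lta = energy[i - n_lta:i].mean()
        if lta > 0 and sta / lta > threshold:
            return i
    return None

fs = 100  # Hz
t = np.arange(0, 30, 1 / fs)
trace = np.random.default_rng(2).normal(0, 0.1, t.size)
trace[t > 12] += np.sin(2 * np.pi * 5 * t[t > 12])  # synthetic arrival at 12 s
idx = sta_lta_onset(trace, fs)
print(f"onset detected at {idx / fs:.2f} s" if idx else "no onset")
```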
Abstract: This paper continues the work carried out by various turbulence modelers in oceanography on the topic of oceanic turbulent mixing. It evaluates the evolution of ocean water temperature and salinity through appropriate modeling of turbulent mixing, using a proper prescription of the eddy viscosity. Many modelers in the past have suggested that terms such as shear, buoyancy, and vorticity are the parameters that decide the slow pressure-strain correlation. We add the observation that dissipation anisotropy also modifies this correlation through the eddy viscosity parameterization. This slightly recalibrates the established correlation constants and gives improved results. The anisotropization of dissipation implies that the critical Richardson number increases well beyond unity (to 1.66) to accommodate enhanced mixing, as is seen in reality. The model is run for a couple of test cases in the General Ocean Turbulence Model (GOTM), and the results are presented here.
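For reference, the gradient Richardson number referred to above, in its standard definition (turbulent mixing is sustained below the critical value, which the recalibrated model raises from near unity to about 1.66):

\[
Ri = \frac{N^2}{M^2} = \frac{-\dfrac{g}{\rho_0}\,\dfrac{\partial \rho}{\partial z}}{\left(\dfrac{\partial u}{\partial z}\right)^{2} + \left(\dfrac{\partial v}{\partial z}\right)^{2}},
\]

where N is the buoyancy frequency and M the vertical shear of the horizontal velocity.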
Abstract: Eddy viscosity models in turbulence modeling can be broadly classified as linear and nonlinear models. Linear formulations are simple and require fewer computational resources, but they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for the Reynolds stress anisotropy is adopted for the formulation of the eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model's predictions match the observational data well and improve on the predictions of the two-equation k-epsilon model. The proposed model can easily be incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. In practice, the model can be applied in coastal regions, where transverse shear induces higher vorticity, and for predicting flow in estuaries and lakes, where the depth is comparatively small. The model's predictions of marine turbulence and related quantities (e.g. sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.
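For reference, the Reynolds stress anisotropy tensor and a generic quadratic (nonlinear) eddy-viscosity expansion of the kind described above (the coefficients β₁, β₂ are model-specific; the paper's exact constitutive equation may differ):

\[
b_{ij} = \frac{\overline{u_i' u_j'}}{2k} - \frac{\delta_{ij}}{3}, \qquad
b_{ij} = -\frac{\nu_t}{k} S_{ij} + \beta_1\!\left(S_{ik}S_{kj} - \tfrac{1}{3}S_{kl}S_{kl}\,\delta_{ij}\right) + \beta_2\!\left(S_{ik}W_{kj} - W_{ik}S_{kj}\right),
\]

where S_{ij} and W_{ij} are the mean strain-rate and rotation-rate tensors and k is the turbulent kinetic energy.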
Abstract: Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation amongst various units while computing the cost of generation and the cost of emission of global-warming gases such as sulphur dioxide, nitrous oxide, and carbon monoxide. In this work, we employ ramp rate constriction factor based particle swarm optimization (RRCPSO) to analyze several performance objectives, namely the cost of generation, the cost of emission, and a dual objective function combining both, through simulated experimental results. A 6-unit, 30-bus IEEE test case system is used for the simulations, with improved weight factors and advanced ramp rate limit constraints for optimizing the total cost of generation and emission. The method increases the tendency of particles to explore the solution space, improving their convergence rates. Earlier work based on dispersed PSO (DPSO) and constriction factor based PSO (CPSO) shows comparatively higher computational times and poorer optimal solutions than the present approach. This paper applies the ramp rate and constriction factor based PSO to compute the cost, emission, and total objectives, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, demonstrating lower computational time and better optimal solutions.
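A minimal sketch of one RRCPSO-style update step, assuming the standard Clerc constriction factor and simple ramp-rate clipping (the coefficients and limits are illustrative placeholders, not the IEEE 30-bus data):

```python
import numpy as np

phi1, phi2 = 2.05, 2.05
phi = phi1 + phi2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))  # Clerc constriction

def pso_step(x, v, pbest, gbest, x_prev, ramp_up, ramp_down, lo, hi, rng):
    """One constriction-factor PSO update with ramp-rate limits."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x))
    x = x + v
    # New output must stay within [x_prev - ramp_down, x_prev + ramp_up]
    # and within the unit's generation limits [lo, hi].
    return np.clip(x, np.maximum(lo, x_prev - ramp_down),
                      np.minimum(hi, x_prev + ramp_up)), v

rng = np.random.default_rng(3)
x, v = np.array([150.0, 200.0]), np.zeros(2)  # two units, MW
x, v = pso_step(x, v, pbest=np.array([160.0, 190.0]),
                gbest=np.array([155.0, 195.0]), x_prev=x.copy(),
                ramp_up=20.0, ramp_down=30.0, lo=50.0, hi=250.0, rng=rng)
print(x)
```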
Abstract: Non-parametric reliability techniques are useful for assessing the reliability of systems for which failure rates are not available, in particular when detecting the malfunctioning of a component during ongoing operation of the system is the key purpose. The main purpose of the heat exchanger cycle discussed in this paper is to provide hot water at a constant temperature for long periods of time. In such a cycle, certain components play a crucial role, and this paper presents an effective way to predict their malfunctioning by determining the system reliability. The feasibility of the method is demonstrated with the help of various test cases.
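One simple non-parametric reliability estimator of the kind alluded to above (shown for illustration only; the paper may use a different technique) is the empirical survivor function,

\[
\hat R(t) = \frac{n_s(t)}{n},
\]

where n_s(t) is the number of units still functioning at time t out of n units observed, requiring no assumed failure-rate distribution.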
Abstract: The present paper addresses research in the area of regression testing, with emphasis on automated tools and the prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study is discussed to illustrate this methodology. An industrial case study is also described, in which the number of test cases is so large that they have to be grouped into test suites. In such situations, the genetic algorithm we propose can be used to reconfigure these test suites in each cycle of regression testing. A comparison is made between a proprietary tool and an open source tool using the above-mentioned metrics. Our approach is clarified through several tables.
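A hypothetical prioritization score of the general shape described above (the paper's actual formula is developed in its case study; the metric names and weights here are placeholders):

```python
# Weighted sum of normalized test metrics: higher score = run earlier,
# with execution time penalized. Weights are illustrative.
def priority(fault_history, coverage, exec_time, w=(0.5, 0.3, 0.2)):
    return w[0] * fault_history + w[1] * coverage - w[2] * exec_time

tests = {"TC1": (0.9, 0.6, 0.2), "TC2": (0.3, 0.8, 0.7), "TC3": (0.5, 0.5, 0.1)}
for name, metrics in sorted(tests.items(), key=lambda kv: -priority(*kv[1])):
    print(name, round(priority(*metrics), 2))
```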
Abstract: In aircraft design, the jump from the conceptual to the preliminary design stage introduces a level of complexity which cannot realistically be handled by a single optimiser, be that a human (chief engineer) or an algorithm. The design process is often partitioned along disciplinary lines, with each discipline given a level of autonomy. This introduces a number of challenges including, but not limited to: coupling of design variables; coordinating disciplinary teams; handling large amounts of analysis data; and reaching an acceptable design within time constraints. A number of classical Multidisciplinary Design Optimisation (MDO) architectures exist in academia, specifically designed to address these challenges. Their limited use in the industrial aircraft design process has inspired the authors of this paper to develop an alternative strategy based on well-established ideas from Decision Support Systems. The proposed rule-based architecture sacrifices possibly elusive guarantees of convergence for an attractive return in simplicity. The method is demonstrated on analytical and aircraft design test cases, and its performance is compared to a number of classical distributed MDO architectures.
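A toy illustration of a rule-based coordination step of the kind proposed (the rules, state variables, and adjustment factors below are hypothetical, not the paper's rule base):

```python
# Apply the first rule whose condition matches the current design state.
def coordinate(state, rules):
    for condition, action in rules:
        if condition(state):
            return action(state)
    return state  # no rule fired: design state accepted as-is

rules = [
    (lambda s: s["stress_ratio"] > 1.0,                 # structure overstressed
     lambda s: {**s, "skin_thickness": s["skin_thickness"] * 1.05}),
    (lambda s: s["mass"] > s["mass_target"],            # too heavy
     lambda s: {**s, "span": s["span"] * 0.99}),
]
state = {"stress_ratio": 1.08, "skin_thickness": 2.0,
         "mass": 900.0, "mass_target": 880.0, "span": 34.0}
print(coordinate(state, rules))
```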
Abstract: Manually writing test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but also challenging to maintain. Test cases can be derived from functional requirements expressed in natural language; however, manual test case generation is inefficient and error-prone. In this paper, we present a systematic procedure that automatically derives test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template, and we present a detailed methodology for writing such test-ready user stories. Our tool “Test-o-Matic” automatically generates test cases by processing the restricted user stories, and the generated test cases are executed using the open source Selenium IDE. We evaluate the approach on a case study, an open source web-based application, and assess its effectiveness by seeding faults into the case study using known mutation operators. The results show that test case generation from restricted user stories is a viable approach for the automated testing of web applications.
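A minimal sketch of deriving a test case from a restricted user story; the Given/When/Then template shown is a hypothetical stand-in for the paper's well-defined template:

```python
import re

STORY = """As a customer, I want to log in,
Given the login page is open,
When I enter valid credentials,
Then the dashboard is shown."""

def to_test_case(story):
    """Extract precondition, action, and expected result from the template."""
    pattern = r"Given (.+?),\s*When (.+?),\s*Then (.+?)\."
    pre, action, expected = re.search(pattern, story, re.S).groups()
    return {"precondition": pre, "action": action, "expected": expected}

print(to_test_case(STORY))
```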
Abstract: Change requirement traceability in object-oriented software systems is one of the challenging areas of research. Trace links between different artifacts need to be automated or semi-automated across the software development life cycle (SDLC). This paper discusses and implements aspects of dynamically linking artifacts such as requirements, high-level design, code, and test cases through the Extensible Markup Language (XML) or by dynamically generating object-oriented (OO) metrics. Non-functional requirement (NFR) aspects such as stability, completeness, clarity, validity, feasibility, and precision are also discussed. We present this as a fifth taxonomy, which addresses a system vulnerability concern.
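A minimal sketch of representing such trace links in XML using Python's standard library (the element names and artifact IDs are hypothetical):

```python
import xml.etree.ElementTree as ET

# One trace link connecting a requirement to design, code, and a test case.
trace = ET.Element("trace")
link = ET.SubElement(trace, "link", id="TL-1")
ET.SubElement(link, "requirement").text = "REQ-12"
ET.SubElement(link, "design").text = "ClassDiagram::Order"
ET.SubElement(link, "code").text = "Order.java"
ET.SubElement(link, "testcase").text = "TC-47"
print(ET.tostring(trace, encoding="unicode"))
```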
Abstract: A recently developed one-equation turbulence model has been successfully applied to simulate turbulent flows of various complexities. The model, which is based on a transformation of the k-ε closure, is wall-distance free and equipped with lagging destruction/dissipation terms. Test cases included shock/boundary-layer interaction flows over the NACA 0012 airfoil, an axisymmetric bump, and the ONERA M6 wing. The capability of the model to operate in a Scale Resolved Simulation (SRS) mode is demonstrated through the simulation of massive flow separation over a circular cylinder at Re = 1.2 × 10⁶. The results are assessed against available experiments, the Menter (k-ε)1Eq model, and the Spalart-Allmaras model, which belongs to the single-equation closure family.
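For orientation, the generic skeleton of a one-equation eddy-viscosity transport model (shown for reference only; the model's k-ε-derived production and lagged destruction terms differ from this):

\[
\frac{D\tilde\nu}{Dt} = P(\tilde\nu) - D(\tilde\nu) + \frac{1}{\sigma}\,\nabla\cdot\big[(\nu + \tilde\nu)\,\nabla\tilde\nu\big],
\]

where \(\tilde\nu\) is the transported eddy-viscosity variable, P and D are production and destruction terms, and σ is a model constant.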
Abstract: Test automation allows difficult and time-consuming manual software testing tasks to be performed efficiently, quickly, and repeatedly. However, the development and maintenance of automated tests is expensive, so proper prioritization of what to automate first is needed. This paper describes a simple yet efficient approach to such prioritization of test cases, based on the effort needed for both manual execution and software test automation. The suggested approach is very flexible because it allows working with a variety of assessment methods and adding or removing new candidates at any time. The theoretical ideas presented in this article have been successfully applied in real-world situations in several software companies by the authors and their colleagues, including the testing of real estate websites; cryptographic and authentication solutions; and an OSGi-based middleware framework that has been applied in various systems for smart homes, connected cars, production plants, sensors, home appliances, car head units and engine control units (ECUs), vending machines, medical devices, industrial equipment, and other devices that either contain or are connected to an embedded service gateway.
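A minimal sketch of the effort-based ranking described above, automating first the candidates with the best ratio of saved manual effort to automation effort (the candidate names and effort figures are hypothetical person-hours):

```python
candidates = {
    "login_flow":    {"manual": 0.5, "runs_per_release": 20, "automation": 6.0},
    "pdf_export":    {"manual": 1.5, "runs_per_release": 4,  "automation": 12.0},
    "search_filter": {"manual": 0.3, "runs_per_release": 30, "automation": 4.0},
}

def roi(c):
    """Manual effort saved per release, per unit of automation effort."""
    return c["manual"] * c["runs_per_release"] / c["automation"]

for name, c in sorted(candidates.items(), key=lambda kv: -roi(kv[1])):
    print(f"{name}: ROI = {roi(c):.2f}")
```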
Abstract: Software testing has become a mandatory process for assuring software product quality. Hence, test management is needed to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle and how the test processes and test activities, mainly test case creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e. Jira, Robot Framework, and Jenkins.
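A minimal sketch of scripting one of these tools, assuming Robot Framework is installed and test suites live under a hypothetical tests/ directory; a CI job (e.g. in Jenkins) could run this and archive the results directory:

```python
import subprocess

# Run a Robot Framework suite; report.html and output.xml land in results/.
result = subprocess.run(
    ["robot", "--outputdir", "results", "tests/"],
    capture_output=True, text=True,
)
print(result.stdout)
print("suite passed" if result.returncode == 0 else "failures detected")
```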
Abstract: A novel Active Flap System (AFS) has been developed at DTU Wind Energy as the result of a 3-year R&D project following almost 10 years of innovative research in this field. The full-scale AFS comprises an active deformable trailing edge and has been tested at the unique rotating test facility at the Risø campus of DTU Wind Energy in Denmark. The design and instrumentation of the wing section and the AFS are described. The general description and objectives of the rotating test rig at the Risø campus are presented, along with an overview of the sensors in the setup and the test cases. The post-processing of the data is discussed, and results for the steady, flap step, and azimuth control flap cases are presented.
Abstract: In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and the probability-driven greedy algorithm is then used to select, from the reduced regression test suite, the minimum set of test cases that covers all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
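A simplified stand-in for the selection step: plain greedy set cover over interaction patterns (the paper additionally weights choices by probability; the pattern and test IDs here are hypothetical):

```python
def greedy_cover(tests, patterns):
    """Pick test cases until every interaction pattern is covered."""
    selected, uncovered = [], set(patterns)
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            break  # remaining patterns are not coverable by any test
        selected.append(best)
        uncovered -= tests[best]
    return selected

tests = {"TC1": {"p1", "p2"}, "TC2": {"p2", "p3"}, "TC3": {"p1", "p3", "p4"}}
print(greedy_cover(tests, {"p1", "p2", "p3", "p4"}))  # e.g. ['TC3', 'TC1']
```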
Abstract: In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS clouds, aimed at decreasing the cost of patch verification. IaaS services have spread widely in recent years, and many users can customize virtual machines on an IaaS cloud like their own private servers. For software patches to the OS or middleware installed on those virtual machines, users need to apply and verify the patches by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test case DB, distributes patches to the virtual machines in the replicated environments, and runs the test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the reduction in test case creation effort afforded by our proposed idea of a two-tier abstraction of software functions and test cases. We also evaluated the performance of automatic verification in terms of environment replication, test case extraction, and test case execution.
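A high-level sketch of the verification flow described above; every function below is a hypothetical stub, not an OpenStack or Jenkins API:

```python
from dataclasses import dataclass

@dataclass
class Result:
    case: str
    passed: bool

def replicate_environment(env):      # would snapshot/clone VMs on the cloud
    return dict(env, name=env["name"] + "-replica")

def select_test_cases(db, env):      # match test cases to installed software
    return [c for sw in env["software"] for c in db.get(sw, [])]

def run_test(env, case):             # would trigger a CI job per test case
    return Result(case, passed=True)

def verify_patch(env, patch, db):
    replica = replicate_environment(env)
    replica["patch"] = patch         # stand-in for distributing the patch
    cases = select_test_cases(db, replica)
    return [run_test(replica, c) for c in cases]

env = {"name": "user-vm", "software": ["apache", "mysql"]}
db = {"apache": ["http-smoke"], "mysql": ["db-connect"]}
print(verify_patch(env, "security-update", db))
```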