Numerical Simulation of the Dynamic Behavior of a LaNi5 Water Pumping System

A metal hydride water pumping system uses hydrogen as a working fluid to pump water at low head and high discharge. The principal operation of this pump is based on the desorption of hydrogen at high pressure and its absorption at low pressure by a metal hydride. This work studies the dynamic behavior of a metal hydride pump (MHP) using an unsteady model, with LaNi5 as the hydriding alloy. The study shows that with the MHP it is possible to pump 340 l/kg of water per cycle in 15,000 s using 1 kg of LaNi5 at a desorption temperature of 360 K, a pumping head of 5 m, and a desorption gear ratio of 33. It also reveals that the error introduced by the steady model with LaNi5 is about 2%. A dimensional mathematical model and the governing equations of the pump are presented to predict the coupled heat and mass transfer within the MHP. A numerical simulation is then carried out to present the time evolution of the specific water discharge and to test the effect of different parameters (desorption temperature, absorption temperature, desorption gear ratio) on the performance of the water pumping system (specific water discharge, pumping efficiency, and pumping time). In addition, a comparison between the results obtained with the steady and unsteady models is performed for different hydride masses. Finally, a geometric configuration of the reactor is simulated to optimize the pumping time.
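
The pressure swing that drives such a pump can be illustrated with the van't Hoff relation for the hydride plateau pressure. The sketch below is not the paper's model; the enthalpy and entropy values are typical literature figures for LaNi5 desorption, used only to show why heating to 360 K produces a useful pumping pressure.

```python
# Illustrative van't Hoff estimate of the LaNi5 plateau pressure driving the
# pumping cycle. DH and DS are typical literature values, not the paper's.
import math

R = 8.314      # J/(mol K), universal gas constant
DH = 30.8e3    # J/mol H2, desorption enthalpy (assumed literature value)
DS = 108.0     # J/(mol K), desorption entropy (assumed literature value)

def plateau_pressure(T):
    """Equilibrium H2 pressure (atm) from the van't Hoff relation."""
    return math.exp(-DH / (R * T) + DS / R)

# High-pressure desorption at 360 K versus low-pressure absorption at 300 K:
p_des = plateau_pressure(360.0)   # pressure pushing the water column up
p_abs = plateau_pressure(300.0)   # pressure letting the pump chamber refill
print(f"P_des(360 K) = {p_des:.1f} atm, P_abs(300 K) = {p_abs:.1f} atm")
```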

Turing Pattern in the Oregonator Revisited

In this paper, we reconsider the analysis of the Oregonator model. We highlight an error in this analysis which leads to an incorrect depiction of the parameter region in which diffusion-driven instability is possible. We believe that the cause of the oversight is the complexity of stability analyses based on eigenvalues and the dependence on parameters of the matrix minors appearing in stability calculations. We regenerate the parameter space where Turing patterns can be seen, and we use the common Lyapunov function (CLF) approach, which is numerically reliable, to further confirm the dependence of the results on the magnitudes of the diffusion coefficients.
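
A numerical Turing-instability test of the kind underlying such an analysis can be sketched as follows for the standard two-variable Oregonator. The kinetic parameters and diffusion coefficients below are illustrative placeholders, not the paper's values; whether instability is detected depends entirely on them.

```python
# Minimal numerical test for diffusion-driven (Turing) instability in the
# two-variable Oregonator:  eps*u_t = u - u^2 - f*v*(u-q)/(u+q),  v_t = u - v.
import numpy as np

eps, q, f = 0.04, 2e-4, 1.0     # assumed kinetic parameters
Du, Dv = 1.0, 10.0              # assumed diffusion coefficients

# Positive steady state u* = v* solves u^2 + (f+q-1)u - q(1+f) = 0.
b, c = f + q - 1.0, -q * (1.0 + f)
u = (-b + np.sqrt(b * b - 4 * c)) / 2.0
v = u

# Jacobian of the reaction terms at the steady state.
g = 2 * q * v / (u + q) ** 2
J = np.array([[(1 - 2 * u - f * g) / eps, -f * (u - q) / ((u + q) * eps)],
              [1.0, -1.0]])

# Turing instability: Re(lambda) > 0 for J - k^2 D at some wavenumber k > 0.
unstable = any(
    np.linalg.eigvals(J - np.diag([Du, Dv]) * k2).real.max() > 0
    for k2 in np.linspace(0.0, 50.0, 2001)[1:]
)
print("Turing unstable" if unstable else "stable for all sampled k")
```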

Soil Stress State under Tractive Tire and Compaction Model

Soil compaction induced by a tractor towing a trailer has become a major problem associated with sugarcane productivity. Soil beneath the tractor's tire is under not only compressive stress but also shearing stress. Therefore, in order to help understand such effects on soil, this research aimed to determine the stress state in soil and predict soil compaction under a tractive tire. The octahedral stress ratios under the tires were higher than one and much higher under higher draft forces. Moreover, the ratio increased with the number of tire passes. A soil compaction model was developed using data acquired from triaxial tests. The model was then used to predict soil bulk density under a tractive tire. The maximum error was about 4% at 15 cm depth under the lower draft force and tended to increase with depth and draft force. At a depth of 30 cm and under the higher draft force, the maximum error was about 16%.
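
The octahedral stress ratio referred to above follows from the standard octahedral stress definitions. A short sketch, with illustrative principal stresses rather than measured ones:

```python
# Octahedral normal/shear stresses and their ratio from principal stresses.
import math

def octahedral_ratio(s1, s2, s3):
    """Return (sigma_oct, tau_oct, tau/sigma) for principal stresses in kPa."""
    sigma_oct = (s1 + s2 + s3) / 3.0
    tau_oct = math.sqrt((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 3.0
    return sigma_oct, tau_oct, tau_oct / sigma_oct

# Illustrative state with a strong shear component: the ratio exceeds one.
print(octahedral_ratio(300.0, 30.0, 30.0))
```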

From Victim to Ethical Agent: Oscar Wilde's The Ballad of Reading Gaol as Post-Traumatic Writing

Faced with a sudden, unexpected, and overwhelming event, the individual's normal cognitive processing may cease to function, trapping the psyche in "speechless terror", while images, feelings, and sensations are experienced with emotional intensity. Unable to master such a situation, the individual becomes a trauma victim who will be susceptible to traumatic recollections like intrusive thoughts, flashbacks, and repetitive re-living of the primal event in a way that blurs the distinction between past and present and forecloses the future. Trauma is timeless, repetitious, and contagious; a trauma observer could fall prey to "secondary victimhood". Central to the process of healing the psychic wounds in the aftermath of trauma is verbalizing the traumatic experience (i.e., putting it into words), an act which provides a chance for assimilation, testimony, and reevaluation. In light of this paradigm, this paper proposes a reading of Oscar Wilde's The Ballad of Reading Gaol, written shortly after his release from prison, as a post-traumatic text which traces the disruptive effects of the traumatic experience of Wilde's imprisonment for homosexual offences and the ensuing reversal of fortune he endured. Post-traumatic writing demonstrates the process of "working through" a trauma, which may lead to the possibility of ethical agency in the form of a "survivor mission". The paper draws on fundamental concepts and key insights in literary trauma theory, a field characterized by interdisciplinarity, combining the perspectives of critical theory, psychology, psychiatry, psychoanalysis, history, and social studies. Of particular relevance to this paper are the concepts of "vicarious traumatization" and "survivor mission", as The Ballad of Reading Gaol was written in response to Wilde's own prison trauma and the indirect traumatization he experienced as a result of witnessing the execution of a fellow prisoner whose story forms the narrative base of the poem. The Ballad displays Wilde's sense of mission, which leads him to recognize the social as well as ethical implications of personal tragedy. Through a close textual analysis of The Ballad of Reading Gaol within the framework of literary trauma theory, the paper aims to: (a) demonstrate how the poem's thematic concerns, structure, and rhetorical figures reflect the structure of trauma; (b) highlight Wilde's attempts to come to terms with the effects of the cataclysmic experience which transformed him into a social outcast; and (c) show how Wilde manages to transcend the victim status and assume the role of ethical agent to voice a critique of the Victorian penal system and the standards of morality underlying the cruelties practiced against wrongdoers, and to solicit social action.

An Immersive Serious Game for Firefighting and Evacuation Training in Healthcare Facilities

In healthcare facilities, training staff for firefighting and evacuation in real buildings is very challenging due to the presence of a vulnerable population in such an environment. In a standard environment, traditional approaches such as fire drills are often used to train the occupants and provide them with information about fire safety procedures. However, these traditional approaches may be inappropriate for a vulnerable population and can be inefficient from an educational viewpoint, as it is impossible to expose the occupants to scenarios similar to a real emergency. Immersive serious games could be used as an alternative to traditional approaches to overcome their limitations. Serious games are already being used in different safety domains such as fires, earthquakes, and terror attacks, and for several building types (e.g., office buildings, train stations, tunnels). In this study, we developed an immersive serious game to improve the fire safety skills of staff in healthcare facilities. An accurate representation of the healthcare environment was built in Unity3D, including visual and audio stimuli inspired by those employed in commercial action games. The serious game is organised in three levels. In each level, the trainee is presented with a specific fire emergency and can perform protective actions (e.g., firefighting, helping non-ambulant occupants) or ignore the opportunity for action and continue the evacuation. In this paper, we describe all the steps required to develop such a prototype, as well as the key questions that need to be answered to develop a serious game for firefighting and evacuation in healthcare facilities.

Wavelet-Based ECG Signal Analysis and Classification

This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses exclusively the MATLAB environment. It includes the removal of baseline wander and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis, and it is shown that the signals are successfully classified as normal rhythm or abnormal rhythm. The final results demonstrate the adequacy of the wavelet transform for the analysis of ECG signals.
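
Although the study itself was done in MATLAB, the de-noising step can be sketched in Python with PyWavelets; the wavelet, decomposition level, and threshold rule below are common illustrative choices, not necessarily those of the paper.

```python
# Wavelet de-noising of an ECG-like trace plus the SNR/PSNR/MSE metrics.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def snr_psnr_mse(clean, denoised):
    mse = np.mean((clean - denoised) ** 2)
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))
    psnr = 10 * np.log10(np.max(np.abs(clean)) ** 2 / mse)
    return snr, psnr, mse

# Synthetic test: a noisy 360 Hz trace with crude QRS-like spikes.
t = np.arange(0, 10, 1 / 360.0)
clean = np.sin(2 * np.pi * 1.2 * t) ** 63
noisy = clean + 0.1 * np.random.randn(t.size)
print(snr_psnr_mse(clean, wavelet_denoise(noisy)))
```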

Automatic Generation Control Design Based on Full State Vector Feedback for a Multi-Area Energy System Connected via Parallel AC/DC Lines

This article presents the design of optimal automatic generation control (AGC) based on full state feedback for a multi-area interconnected power system. An extra-high-voltage AC transmission line in parallel with a high-voltage DC link is considered as the interconnection between the areas. The optimal AGC is designed and implemented in the wake of a 1% load perturbation in one of the areas, and the system dynamic response plots for various system states are obtained to investigate the system's dynamic performance. The pattern of closed-loop eigenvalues is also determined to analyze system stability. The investigations reveal that the dynamic performance of the system improves appreciably when a high-voltage DC line is paralleled with an extra-high-voltage AC line as the interconnection between the areas. The investigation of the closed-loop eigenvalues reveals that system stability is ensured in all case studies carried out with the designed optimal AGC.
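
Optimal full-state-feedback design of this kind is typically an LQR problem. A minimal sketch of the design step follows, using a two-state toy model as a stand-in for the paper's multi-area AC/DC system matrices:

```python
# LQR full-state-feedback design and closed-loop eigenvalue check.
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy state-space model x_dot = A x + B u (placeholder dynamics).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

P = solve_continuous_are(A, B, Q, R)        # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)             # optimal gain, u = -K x
closed_loop = np.linalg.eigvals(A - B @ K)  # should lie in the left half-plane
print("K =", K, "\nclosed-loop eigenvalues:", closed_loop)
```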

Comparative Study of Different Enhancement Techniques for Computed Tomography Images

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transform representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications, helping to visualize and extract details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. These techniques are compared with each other to find out which provides better contrast for CT images, using the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) as parameters. Logarithmic Transformation provided the clearest and best-quality image compared with all the other techniques studied and achieved the highest PSNR. The comparison concludes with the better-suited approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
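
Three of the compared point transforms, together with the MSE/PSNR metrics, can be sketched as follows; pixel values are assumed to be floats scaled to [0, 1], and the constants are illustrative.

```python
# Power-law, log, and histogram-equalization transforms plus MSE/PSNR.
import numpy as np

def power_law(img, gamma=0.5, c=1.0):
    return c * img ** gamma

def log_transform(img, c=1.0):
    return c * np.log1p(img) / np.log(2.0)   # maps [0,1] back onto [0,1]

def hist_equalize(img, bins=256):
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def mse_psnr(ref, out):
    mse = np.mean((ref - out) ** 2)
    psnr = 10 * np.log10(1.0 / mse)           # peak value is 1.0
    return mse, psnr

img = np.random.rand(64, 64) * 0.4            # synthetic low-contrast image
for name, f in [("power", power_law), ("log", log_transform),
                ("histeq", hist_equalize)]:
    print(name, mse_psnr(img, f(img)))
```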

Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, which deals with various crimes committed in the digital environment, has become an important research topic. It is in the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, and DVDs, and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data on the digital evidence that match specified criteria and presenting them to the investigator (e.g., text files, files starting with the letter A). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. Moreover, because the outcome depends on the examiner's experience, relevant items may be overlooked and the overall result may differ between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human error.
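
The core of such a method, hashing fixed-size blocks of an evidence image and matching them against a reference set, can be sketched briefly. The block size and the source of the known-hash set are assumptions, not the paper's specification:

```python
# Hash-based block matching over a raw evidence image file.
import hashlib

BLOCK = 4096  # bytes; an assumed sector-aligned block size

def block_hashes(path):
    """Yield (offset, sha256-of-block) pairs over an evidence image file."""
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(BLOCK):
            yield offset, hashlib.sha256(chunk).hexdigest()
            offset += len(chunk)

def match_blocks(evidence_path, known_hashes):
    """Return offsets whose block hash appears in the known-hash set."""
    return [off for off, h in block_hashes(evidence_path)
            if h in known_hashes]

# Usage: known_hashes would be built the same way from known illicit files,
# e.g. hits = match_blocks("evidence.dd", known_hashes)  (hypothetical paths).
```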

Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument

The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so the structural error of the sensor has a great impact on the measurement results. So that the target measurement accuracy is not compromised, a limit error is imposed on the installation of the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; this case is taken as an example to provide a method for calculating the other installation limit errors under the premise of ensuring the accuracy of the measurement result. The method provides an approach for deriving the limit errors of the sensor's geometric structure, laying the foundation for mechanical precision design and physical design.
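
The flavor of such a limit-error budget can be conveyed with a deliberately simplified first-order calculation; this is not the paper's measuring principle model, and every number below is an assumption.

```python
# Toy first-order budget for a radial installation error: for a gradient
# estimated from two accelerometers over baseline L, a radial offset delta
# perturbs the estimate by roughly gamma * (delta / L).
L = 0.10            # m, nominal accelerometer baseline (assumed)
gamma = 3000e-9     # 1/s^2, representative gradient signal ~3000 E (assumed)
budget = 1e-9       # 1/s^2, allowed error contribution ~1 E (assumed)

# error ~ gamma * (delta / L)  =>  delta_max = (budget / gamma) * L
delta_max = budget / gamma * L
print(f"allowed radial installation error: {delta_max * 1e6:.1f} um")
```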

Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models

This work is devoted to the modeling of geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is defined as a logarithmic first-order autoregressive process. We observe that the inclusion of log-volatility in the time-varying parameter estimation, carried out via maximum likelihood, significantly improves forecasting. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead volatility forecasts (with ±2 standard prediction errors) is feasible, since it possesses good convergence properties.
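
The log-AR(1) stochastic volatility model and its one-step-ahead forecast can be sketched as follows; the parameter values are illustrative, not estimates from geophysical data.

```python
# Simulate h_t = mu + phi*(h_{t-1} - mu) + sig_eta*eta_t,  y_t = exp(h_t/2)*eps_t,
# then form the one-step-ahead volatility forecast with a +/- 2 s.e. band.
import numpy as np

rng = np.random.default_rng(0)
mu, phi, sig_eta, T = -1.0, 0.95, 0.2, 1000   # assumed SV parameters

h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)    # observed series

h_next = mu + phi * (h[-1] - mu)              # one-step-ahead log-volatility
band = 2 * sig_eta                            # +/- 2 standard prediction errors
print("sigma forecast:", np.exp(h_next / 2),
      "band:", (np.exp((h_next - band) / 2), np.exp((h_next + band) / 2)))
```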

Nonlinear Estimation Model for Rail Track Deterioration

Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. Under economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation of such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and to ensure vehicle safety. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, they may contain errors, which sometimes make them unusable for prediction model development; this is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks: accurate models increase track safety and decrease the long-term cost of maintenance. In this research, a short review of rail track degradation prediction models is presented before rail track degradation is estimated for the curved sections of the Melbourne tram network using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
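
The first-order Sugeno inference at the heart of ANFIS can be sketched compactly. The single input, the Gaussian memberships, and the consequent coefficients below are invented placeholders, not trained values from the Melbourne data:

```python
# Minimal first-order Sugeno (TSK) inference of the kind ANFIS trains.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_predict(x):
    # Layers 1-2: firing strengths of two rules on one input (e.g., tonnage).
    w = np.array([gauss(x, c=0.2, s=0.2), gauss(x, c=0.8, s=0.2)])
    # Layer 3: normalized firing strengths.
    wn = w / w.sum()
    # Layer 4: first-order consequents f_i = p_i * x + r_i (placeholder values).
    f = np.array([0.1 * x + 0.02, 0.6 * x + 0.10])
    # Layer 5: weighted sum gives the predicted degradation index.
    return float(wn @ f)

for x in (0.1, 0.5, 0.9):
    print(x, anfis_predict(x))
```

In ANFIS proper, the membership centers/widths and the consequent coefficients are tuned by hybrid gradient/least-squares learning rather than fixed by hand.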

Mathematical Modeling of the Working Principle of Gravity Gradient Instrument

The gravity field is of great significance in geoscience, the national economy, and national security, and gravitational gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. Therefore, this paper starts by analyzing the working principle of the gravity gradient sensor via Newton's law, and then considers the relative motion between inertial and non-inertial systems to build a relatively adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.
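
The standard rotating-frame relation for a differential accelerometer pair conveys the structure of such a model; the notation and sign convention below are assumptions, not necessarily the paper's exact formulation. For two accelerometers separated by baseline vector r on a platform rotating at angular rate ω, the differenced specific-force outputs mix the gravity gradient tensor Γ with centrifugal and Euler terms that the instrument must separate out:

```latex
\[
\mathbf{f}_2 - \mathbf{f}_1
  = \bigl([\dot{\boldsymbol{\omega}}\times]
        + [\boldsymbol{\omega}\times][\boldsymbol{\omega}\times]
        - \Gamma\bigr)\,\mathbf{r},
\qquad
\Gamma_{ij} = \frac{\partial g_i}{\partial r_j},
\]
```

where [ω×] denotes the skew-symmetric matrix of the angular-rate vector. Isolating Γ from the angular-motion terms (e.g., by rotation modulation) is what makes the relative motion between inertial and non-inertial frames central to the model.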

Multilayer Neural Network and Fuzzy Logic Based Software Quality Prediction

In the software development lifecycle, quality prediction techniques hold prime importance for minimizing future design errors and expensive maintenance. Many techniques have been proposed by various researchers, but with the increasing complexity of the software lifecycle model, it is crucial to develop a flexible system that can cater for the factors that ultimately have an impact on the quality of the end product. These factors include properties of the software development process and of the product, along with its operating conditions. In this paper, a multilayer neural network (perceptron) based software quality prediction technique is proposed. Using this technique, stakeholders can predict the quality of the resulting software during the early phases of the lifecycle, saving time and resources on the future elimination of design errors and costly maintenance. The technique can be brought into practical use through successful training.
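
A multilayer-perceptron quality classifier of this general kind can be sketched as follows; the three input factors and the labeling rule are invented placeholders for historical process/product metrics, not the paper's features.

```python
# Toy multilayer-perceptron quality predictor trained on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Columns: e.g., review coverage, defect density, schedule pressure (made up).
X = rng.random((200, 3))
# Placeholder labeling rule standing in for historical quality outcomes.
y = (0.5 * X[:, 0] - 0.8 * X[:, 1] - 0.2 * X[:, 2] + 0.4 > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:150], y[:150])                     # the "successful training" step
print("holdout accuracy:", clf.score(X[150:], y[150:]))
```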

Analysis of Simple Mechanisms to Continuously Vary Mach Number in a Supersonic Wind Tunnel Facility

Supersonic wind tunnel nozzles are generally capable of producing a constant Mach number flow in the test section of the wind tunnel. As a result, most supersonic vehicles are designed using steady-state flow characteristics, which may introduce errors when unsteady situations are encountered. This study aims to explore the possibility of varying the Mach number of the flow during wind tunnel operation. The nozzle walls are restricted to be inflexible to permit cooling near the throat, owing to the high stagnation temperature required of the flow to simulate the conditions experienced by the vehicle. Two simple independent mechanisms, rotation and translation of the nozzle walls, have been analyzed, and the nozzle ranges have been optimized to vary the Mach number from Mach 2 to Mach 5 using the minimum number of nozzles in the wind tunnel.
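
The wall geometry required for each test-section Mach number is fixed by the isentropic area-Mach relation; a quick check over the Mach 2 to Mach 5 range (with γ = 1.4 assumed for air):

```python
# Isentropic area-Mach relation: required A/A* for each test-section Mach.
def area_ratio(M, g=1.4):
    """A/A* for isentropic flow at Mach M."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M * M)) \
           ** ((g + 1) / (2 * (g - 1)))

for M in (2.0, 3.0, 4.0, 5.0):
    print(f"M = {M}: A/A* = {area_ratio(M):.2f}")   # 1.69, 4.23, 10.72, 25.00
```

The steep growth of A/A* with Mach number is what forces either several nozzles or large wall rotations/translations to cover the full range.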

Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking

In this paper, we present a technique for the secure watermarking of grayscale and color images. The technique consists of applying the Singular Value Decomposition (SVD) in the LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images, and its performance is demonstrated by PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Square Error), and SSIM (Structural Similarity) computations.
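
The embedding step of SVD-in-wavelet-domain schemes can be sketched as below. PyWavelets' standard DWT stands in for the lifting wavelet transform, the watermark is treated as a 1-D sequence added to the singular values, and alpha, the wavelet, and the subband choice are all assumptions rather than the paper's settings:

```python
# Additive SVD watermark embedding in the low-frequency wavelet subband.
import numpy as np
import pywt

def embed(host, mark, alpha=0.05, wavelet="haar"):
    LL, (LH, HL, HH) = pywt.dwt2(host, wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = S + alpha * mark[: S.size]        # embed in the singular values
    LLw = (U * Sw) @ Vt                    # rebuild the marked subband
    return pywt.idwt2((LLw, (LH, HL, HH)), wavelet)

host = np.random.rand(128, 128)            # synthetic host image
mark = np.random.rand(64)                  # watermark as a 1-D sequence here
watermarked = embed(host, mark)
print("max pixel change:", np.abs(watermarked - host).max())
```

Extraction reverses these steps using the stored signature material (U, Vt, and the original singular values).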

High-Value Health System for All: Technologies for Promoting Health Education and Awareness

Health for all is considered a sign of well-being and inclusive growth. New healthcare technologies are contributing to the quality of human lives by promoting health education and awareness, leading to the prevention, early diagnosis, and treatment of the symptoms of diseases. Healthcare technologies have now migrated from medical and institutionalized settings to the home and everyday life. This paper explores these new technologies and investigates how they contribute to health education and awareness, promoting the objective of a high-value health system for all. The methodology used for the research is literature review. The paper also discusses the opportunities and challenges of futuristic healthcare technologies. The combined advances in genomic medicine, wearables, and the IoT, with enhanced data collection in electronic health record (EHR) systems, environmental sensors, and mobile device applications, can contribute in a big way to a high-value health system for all. The promise of these technologies includes reduced total cost of healthcare, reduced incidence of medical diagnosis errors, and reduced treatment variability. The major barriers to adoption include concerns with the security, privacy, and integrity of healthcare data; regulation and compliance issues; service reliability; interoperability and portability of data; and the user friendliness and convenience of these technologies.

Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States

The effectiveness of energy demand policy depends on identifying the key drivers of energy demand in both the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China, and the USA using the Vector Error Correction Model (VECM) approach. The study employs annual time series data on energy consumption (ED), real gross domestic product per capita (RGDP), real energy prices (P), and urbanization (N) for a thirty-six-year sample period. The time-series data are sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA, and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Since urbanization is a key factor in energy demand, it is recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which drives the increase in energy demand; energy excesses should be penalized, while energy management should be incentivized.
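
The estimation step can be sketched with statsmodels; the file name, column layout, lag order, and deterministic specification below are placeholders, not the paper's exact setup.

```python
# VECM estimation for one country's annual series with statsmodels.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# df: annual series with columns ["ED", "RGDP", "P", "N"] (hypothetical file).
df = pd.read_csv("energy_nigeria.csv", index_col=0)

# Johansen-based choice of cointegration rank, then VECM fit.
rank = select_coint_rank(df, det_order=0, k_ar_diff=1)
model = VECM(df, k_ar_diff=1, coint_rank=rank.rank, deterministic="ci")
res = model.fit()
print(res.summary())   # long-run (beta) and short-run (gamma) coefficients
```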

Ice Load Measurements on Known Structures Using Image Processing Methods

This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed from analyses of captured images. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Asymmetric ice accumulated on the structure in a cold room represents the actual case in the experiments. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. Thresholding is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates. Averaging the ice diameters from different camera views yields the ice thicknesses of the structure elements. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation with an acceptable range of error. The method can be applied to complex structures by defining structure and camera coordinates.
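
The thresholding and per-element width measurement can be sketched with OpenCV; the file name is hypothetical, and the pixel-to-millimetre scale would in practice come from the calibrated camera parameters rather than the constant used here.

```python
# Otsu thresholding of a captured frame and row-wise iced-bar width measurement.
import cv2
import numpy as np

MM_PER_PX = 0.8   # placeholder for the scale from camera calibration

img = cv2.imread("iced_bar.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Apparent width of the iced bar on each image row: foreground pixel count.
widths_px = (binary > 0).sum(axis=1)
diameters_mm = widths_px[widths_px > 0] * MM_PER_PX
print("mean iced diameter: %.1f mm" % diameters_mm.mean())
# Averaging such diameters over several camera views gives the ice thickness.
```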

Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

On the bridge of a ship, officers look for visual aids to navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, and buoys, among others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues that need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port; this overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference with the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to apply a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
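
The bearing and distance computation that the application performs for each navigational aid amounts to standard great-circle geodesy; a sketch under a spherical-earth assumption, with an illustrative position fix:

```python
# Great-circle (haversine) distance and initial bearing from ship to aid.
import math

R_EARTH = 6371000.0  # m, mean earth radius

def distance_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in m, initial bearing in deg) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * R_EARTH * math.asin(math.sqrt(a))
    brg = math.degrees(math.atan2(
        math.sin(dlon) * math.cos(p2),
        math.cos(p1) * math.sin(p2)
        - math.sin(p1) * math.cos(p2) * math.cos(dlon)))
    return dist, brg % 360.0

# Illustrative fix: ship position to a nearby buoy.
print(distance_bearing(50.885, -1.395, 50.890, -1.390))
```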