Regional Low Gravity Anomalies Influencing High Concentrations of Heavy Minerals on Placer Deposits

Regions of low gravity and gravity anomalies both influence heavy mineral concentrations in placer deposits. Economically important heavy minerals are likely to show higher levels of deposition in low gravity regions of placer deposits. This can be seen in the coastal regions of Southern Asia, particularly Sri Lanka and Peninsular India, areas located in the lowest gravity region of the world. About 70 kilometers of the east coast of Sri Lanka is covered by a high percentage of ilmenite deposits, and the southwest coast of the island hosts monazite placer deposits; these are among the largest placer deposits in the world. In India, the heavy mineral industry has a good market. On the other hand, based on the coastal placer deposits recorded, the high gravity region located around Papua New Guinea has no such heavy mineral deposits. In low gravity regions, with the help of other depositional environmental factors, the grains have more time and space to float in the sea, which helps bring high concentrations of heavy mineral deposits to the coast. The effect of low and high gravity can be demonstrated using heavy mineral separation devices. The Wilfley heavy mineral separating table is one of these; it is extensively used in industry and in laboratories for heavy mineral separation. The horizontally oscillating Wilfley table separates heavy and light mineral grains into different fractions with the use of water. In this experiment, the low and high angles of the Wilfley table represent low and high gravity, respectively. A sample mixture of grain size

Assessment of Drug Delivery Systems from Molecular Dynamic Perspective

In this study, we developed and simulated a nano-drug delivery system and assessed its efficacy in comparison with free drug administration. Computational models can be utilized to accelerate experimental steps and to contain the high cost of experiments. Molecular dynamics simulation (MDS), in particular NAMD, was utilized to better understand the interaction of an anti-cancer drug with a cell membrane model. Paclitaxel (PTX) and dipalmitoylphosphatidylcholine (DPPC) were selected as the drug molecule and as a natural phospholipid nanocarrier, respectively. This work focused on two important interaction parameters between molecules: the center-of-mass (COM) distance and the van der Waals interaction energy. Furthermore, we compared the simulation results of the PTX interaction with the cell membrane against the interaction of the drug-loaded DPPC nanocarrier with the cell membrane. The molecular dynamics analysis showed a low interaction energy between the nanocarrier and the cell membrane, as well as a significant decrease in the COM distance between the nanocarrier and the cell membrane during the interaction. Thus, the drug vehicle showed a notably better interaction with the cell membrane compared with the free drug.
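
As a minimal sketch of the kind of COM analysis described above (assuming the atomic coordinates and masses of the two groups have already been extracted from the trajectory; all array names here are illustrative, not part of the NAMD workflow reported in the paper), the center-of-mass separation per frame can be computed as follows:

```python
import numpy as np

def center_of_mass(coords, masses):
    """Mass-weighted centroid of one molecular group in a single frame.

    coords: (n_atoms, 3) array of Cartesian coordinates
    masses: (n_atoms,) array of atomic masses
    """
    masses = np.asarray(masses, dtype=float)
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

def com_distance_trajectory(carrier_frames, membrane_frames,
                            carrier_masses, membrane_masses):
    """COM separation for two atom groups, one value per trajectory frame.

    carrier_frames, membrane_frames: (n_frames, n_atoms, 3) coordinate arrays
    """
    distances = []
    for carrier, membrane in zip(carrier_frames, membrane_frames):
        d = center_of_mass(carrier, carrier_masses) \
            - center_of_mass(membrane, membrane_masses)
        distances.append(np.linalg.norm(d))
    return np.array(distances)
```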

A Green Method for Selective Spectrophotometric Determination of Hafnium(IV) with Aqueous Extract of Ficus carica Tree Leaves

A clean spectrophotometric method for the determination of hafnium using a green reagent, an acidic extract of Ficus carica tree leaves, is developed. In 6 M hydrochloric acid, hafnium reacts with this reagent to form a yellow product. The product shows maximum absorbance at 421 nm with a molar absorptivity of 0.28 × 10⁴ l mol⁻¹ cm⁻¹, and the method was linear in the 2-11 µg ml⁻¹ concentration range. The detection limit was found to be 0.312 µg ml⁻¹. Except for zirconium and iron, the selectivity was good, and most ions did not show any significant spectral interference at concentrations up to several hundred times that of hafnium. The proposed method is green, simple, low-cost, and selective.
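
As background (the standard spectrophotometric relation, not specific to this paper), the calibration underlying such a method rests on the Beer-Lambert law, which connects the reported molar absorptivity to the measured absorbance and the analyte concentration:

```latex
% Beer-Lambert law: A = absorbance, \varepsilon = molar absorptivity,
% l = optical path length, c = molar concentration of the analyte
\[
A = \varepsilon \, l \, c,
\qquad
c = \frac{A}{\varepsilon \, l}
\]
```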

Similarity Based Membership of Elements to Uncertain Concept in Information System

The degree of membership of an element to an uncertain concept has been determined in many ways, using equivalence and similarity relations in information systems. In the case of similarity, existing methods did not take into account the degree of similarity between elements. In this paper, we use a new definition for finding the membership based on the degree of similarity. We provide an example to clarify the suggested method and compare it with previous methods. This method opens the door to more accurate decisions in information systems.
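
As an illustrative sketch only (the paper's actual definition is not reproduced here), one simple way to derive a membership degree from a similarity relation is to aggregate an element's similarity to the elements known to belong to the concept and normalize by its similarity to the whole universe:

```python
def membership_degree(x, concept, universe, similarity):
    """Illustrative similarity-based membership: the similarity mass that x
    shares with the concept, normalized by its similarity to the universe.

    x: the element of interest
    concept: elements known to belong to the uncertain concept
    universe: all elements of the information system
    similarity: function (a, b) -> degree in [0, 1]
    """
    total = sum(similarity(x, y) for y in universe)
    if total == 0:
        return 0.0
    in_concept = sum(similarity(x, y) for y in concept)
    return in_concept / total

# Toy example with a similarity defined on a numeric attribute
sim = lambda a, b: max(0.0, 1.0 - abs(a - b) / 10.0)
universe = [1, 2, 3, 8, 9, 10]
concept = [8, 9, 10]          # elements known to satisfy the concept
print(membership_degree(7, concept, universe, sim))
```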

Using Game Engines in Lightning Shielding: The Application of the Rolling Spheres Method on Virtual As-Built Power Substations

Lightning strikes can cause severe negative impacts on the electrical sector, causing direct damage to equipment as well as shutdowns, especially when they occur in power substations. To mitigate this problem, meticulous planning of the power substation protection system is of vital importance. A critical part of this is the distribution of shielding wires through the substation, which creates an imaginary 3D protection mesh similar to a circus tarpaulin. Equipment enclosed in the volume defined by that 3D mesh is considered protected against lightning strikes. The use of traditional methods of longitudinal cutting analysis based on 2D CAD tools makes the process laborious, and the results obtained may not guarantee satisfactory protection of electrical equipment. This work describes the application of a game engine to the problem of lightning protection of power substations, providing visualization of the 3D protection mesh, the number of protected components, and the highlighting of equipment that remains unprotected. In addition, aspects regarding the implementation and the advantages of approaching the problem using Unreal® Engine 4 are described. To validate the results, traditional 2D methods are applied to the same case study as the proposed technique and the outcomes are compared. Finally, a comparative study involving different levels of protection using the technique developed in this work is presented, showing that modern game engines can be a powerful accessory for simulations in several areas of engineering.
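
As a simplified illustration of the geometry behind the rolling sphere method (a 2D cross-section sketch with hypothetical values, not the Unreal Engine 4 implementation described in the paper), a point below a single shield wire can be checked against the critical sphere that rests on the ground and touches the wire tip:

```python
import math

def protected_by_wire(d, y, h, R):
    """Simplified 2D rolling-sphere check for a single horizontal shield wire.

    d: horizontal distance of the equipment point from the wire (m)
    y: height of the equipment point (m)
    h: shield wire height (m), capped at R
    R: rolling sphere radius for the chosen protection level (m)

    The critical sphere rests on the ground and touches the wire tip; its
    centre sits at height R and horizontal offset x_c from the wire. A point
    lying below that sphere's lower arc is considered protected.
    """
    h = min(h, R)
    x_c = math.sqrt(2 * R * h - h * h)   # ground offset of the critical sphere centre
    d = abs(d)
    if d >= x_c:
        return False                     # outside the protected strip
    arc_height = R - math.sqrt(R * R - (x_c - d) ** 2)
    return y < arc_height

# Hypothetical example: 45 m sphere radius, 15 m shield wire,
# equipment point 5 m away at 6 m height
print(protected_by_wire(d=5.0, y=6.0, h=15.0, R=45.0))
```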

Hybrid Reliability-Similarity-Based Approach for Supervised Machine Learning

Data mining has seen big advances over recent years because of the spread of the internet, which generates a tremendous volume of data every day, and also because of the immense advances in the technologies that facilitate the analysis of these data. In particular, classification techniques form a subdomain of data mining that determines to which group each data instance belongs within a given dataset. They are used to classify data into different classes according to desired criteria. Generally, a classification technique is either statistical or machine-learning based, and each type of technique has its own limits. Nowadays, data are becoming increasingly heterogeneous; consequently, current classification techniques are encountering many difficulties. This paper defines new measure functions to quantify the resemblance between instances and then combines them in a new approach that differs from existing algorithms in its reliability computations. The proposed approach outperformed the most common classification techniques, with an F-measure exceeding 97% on the Iris dataset.
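
As a rough sketch of how resemblance between instances can drive classification (illustrative only; the paper's actual measure functions and reliability computation are not reproduced here), a class can be assigned by accumulating similarity per class and reporting the winning class's share as a reliability score:

```python
import numpy as np

def similarity(a, b):
    """Illustrative resemblance measure: inverse of the Euclidean distance."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(a) - np.asarray(b)))

def classify_with_reliability(x, X_train, y_train):
    """Assign x to the class whose training instances resemble it most, and
    report the winning class's share of total similarity as a reliability score."""
    scores = {}
    for xi, yi in zip(X_train, y_train):
        scores[yi] = scores.get(yi, 0.0) + similarity(x, xi)
    total = sum(scores.values())
    label = max(scores, key=scores.get)
    reliability = scores[label] / total if total > 0 else 0.0
    return label, reliability

# Toy 2D example
X_train = [[1.0, 1.1], [0.9, 1.0], [4.0, 4.2], [4.1, 3.9]]
y_train = ["A", "A", "B", "B"]
print(classify_with_reliability([1.2, 0.8], X_train, y_train))
```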

Hardware-in-the-Loop Test for Automatic Voltage Regulator of Synchronous Condenser

The automatic voltage regulator (AVR) plays an important role in the volt/var control of synchronous condensers (SCs) in power systems. Testing AVR performance under steady-state and dynamic conditions in a real grid is expensive, inefficient, and hard to achieve. To address this issue, we implement a hardware-in-the-loop (HiL) test for the AVR of an SC to assess its steady-state and dynamic performance under different operating conditions. The startup procedure of the system and voltage set-point changes are studied to evaluate the AVR hardware response. Overexcitation, underexcitation, and AVR set-point loss are tested to compare the performance of the SC with the AVR hardware against that of simulation. The comparative results demonstrate how the AVR will work in a real system. The results show that the HiL test is an effective approach for testing devices before deployment and is able to parameterize the controller with lower cost, higher efficiency, and more flexibility.

Optimization Study of Adsorption of Nickel(II) on Bentonite

This work concerns the experimental study of the adsorption of Ni(II) on bentonite. The effects of various parameters, such as contact time, stirring rate, initial concentration of Ni(II), mass of clay, initial pH of the aqueous solution and temperature, on the adsorption yield were investigated. The effect of ionic strength on the adsorption yield was examined by identifying and quantifying the chemical species present in the aqueous phase containing the metallic ion Ni(II). The adsorbed species were investigated with a calculation program using CHEAQS V. L20.1 in order to determine the relation between the percentages of the adsorbed species and the adsorption yield. The optimization was carried out using a 2³ factorial design. The individual and combined effects of three process parameters, i.e. initial Ni(II) concentration in aqueous solution (2 × 10⁻³ and 5 × 10⁻³ mol/L), initial pH of the solution (2 and 6.5), and mass of bentonite (0.03 and 0.3 g), on Ni(II) adsorption were studied.
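
For illustration, the eight runs of a 2³ full factorial design over the three factors listed above can be enumerated directly (the low/high levels are taken from the abstract; the run ordering is arbitrary):

```python
from itertools import product

# Low/high levels of the three factors studied
levels = {
    "C0_Ni_mol_per_L": (2e-3, 5e-3),   # initial Ni(II) concentration
    "pH":              (2.0, 6.5),     # initial pH of the solution
    "bentonite_g":     (0.03, 0.3),    # mass of bentonite
}

# Enumerate the 2^3 = 8 experimental runs
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {run}")
```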

Using Music in the Classroom to Help Syrian Refugees Deal with Post-War Trauma

Millions of Syrian families have been displaced since the beginning of the Syrian war, and post-war trauma has had detrimental effects on the mental health of refugee children. While educational strategies have focused on vocational training and academic achievement, little has been done to include music in the school curriculum to help these children improve their mental health. The music education and psychology literature, on the other hand, shows the positive effects of music on traumatized children, especially when it comes to dealing with stress. This paper presents a brief literature review of trauma, music therapy, and music in the classroom, after introducing the Syrian war and refugee situation. Furthermore, the paper highlights the benefits reported in the literature of using music with traumatized children and offers strategies for teachers (such as singing, playing an instrument, songwriting, and others) to include music in their classrooms to help Syrian refugee children deal with post-war trauma.

Texture Observation of Bending by XRD and EBSD Method

Crystal orientation is a factor that affects microscopic material properties; it determines the anisotropy of a polycrystalline material and is closely related to its mechanical properties. In this paper, the crystal orientation of polycrystalline pure copper was analyzed by two different methods: X-ray diffraction (XRD) and electron backscatter diffraction (EBSD). Because the X-ray beam diameter in the former is larger than the probe size in the latter, XRD can measure the crystal orientation relatively macroscopically. By these measurements, we investigated the changes in crystal orientation and internal microstructure of pure copper subjected to bending.

SEM-EBSD Observation for Microtubes by Using Dieless Drawing Process

Because die drawing requires insertion of a die, a plug, or a mandrel, higher precision and efficiency are demanded of the drawing equipment as the tube diameter becomes smaller. Manufacturing of such tubes is also accompanied by problems such as cracking and fracture. We specifically examine dieless drawing, which is less affected by these drawing-related difficulties. This deformation process is governed by a principle similar to the reduction in diameter that occurs when pulling a heated glass tube. We conducted dieless drawing of SUS304 stainless steel microtubes under various conditions defined by three process parameters: heating temperature, area reduction, and drawing speed. We used SEM-EBSD to observe the effects of the processing conditions on microstructural elements. As a result of this study, the crystallographic orientation of the microtubes was clarified by SEM-EBSD analysis.

Fuzzy Analytic Hierarchy Process for Determination of Supply Chain Performance Evaluation Criteria

The fuzzy AHP (Analytic Hierarchy Process) method is a decision-making approach obtained by integrating the conventional AHP method with fuzzy set theory. In this study, the processes of the production planning, inventory management and purchasing departments of a system were analysed in order to determine the performance criteria of each area. The current work processes were analysed by several decision-makers, and pairwise comparisons of the criteria were completed by assigning scores on a 1-9 scale. The criteria were ranked according to their weights using the fuzzy AHP approach, and the top three performance criteria of each department were determined. After that, the performance criteria of the supply chain consisting of the three departments were determined. The processes of each department were compared by the decision-makers in order to build the supply chain performance system and obtain its performance criteria. According to the results, the criteria determined by fuzzy AHP will be used in the supply chain performance system in the future.
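
A minimal sketch of one common fuzzy AHP weighting scheme (Buckley's geometric-mean method with triangular fuzzy numbers; illustrative only, and not necessarily the exact variant or judgement values used in this study) is shown below:

```python
import numpy as np

def fuzzy_ahp_weights(matrix):
    """Crisp weights from a pairwise comparison matrix of triangular fuzzy
    numbers (l, m, u), via fuzzy geometric means and centroid defuzzification."""
    matrix = np.asarray(matrix, dtype=float)     # shape (n, n, 3)
    n = matrix.shape[0]
    # Fuzzy geometric mean of each row, component-wise
    geo = np.prod(matrix, axis=1) ** (1.0 / n)   # shape (n, 3)
    # Fuzzy weights: multiply by the inverse of the total, i.e. divide the
    # (l, m, u) components by the reversed column sums (sum_u, sum_m, sum_l)
    col_sum = geo.sum(axis=0)
    fuzzy_w = geo / col_sum[::-1]
    # Defuzzify by the centroid (mean of l, m, u) and renormalize
    crisp = fuzzy_w.mean(axis=1)
    return crisp / crisp.sum()

# Example: three criteria, fuzzy 1-9 scale judgements (hypothetical values)
one = (1, 1, 1)
M = [
    [one,              (2, 3, 4),        (4, 5, 6)],
    [(1/4, 1/3, 1/2),  one,              (1, 2, 3)],
    [(1/6, 1/5, 1/4),  (1/3, 1/2, 1),    one],
]
print(fuzzy_ahp_weights(M))
```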

Tolerance and Perspective towards Disability: A Mixed Methods Study

Society encompasses many kinds of diversity with respect to sex, age, religion, abilities or disabilities, education, etc. Regardless of these differences, everybody needs to be tolerated and equally included in society. In order to provide quality inclusion, society needs to tolerate differences. This study focuses on differences related to disability. To examine tolerance towards disability and inclusion, this study was conducted with students attending regular elementary and high school. The main goal was to examine their attitudes towards their classmates and elderly people with disabilities. The study begins with the hypothesis that the environment has a highly developed tolerance towards people with disabilities, regardless of age. The sample was divided according to tasks and methods of analysis. Students attending regular elementary school were asked to make drawings of their classmates with disabilities. The drawings were analyzed using quantitative methodology according to the colors the children used and the position of the character on the paper. Students attending high school and members of the general population were asked to complete a questionnaire designed for this study during a workshop held on the International Day for Tolerance. Responses were analyzed using qualitative methodology. The hypothesis was confirmed.

A Review on the Outlook of the Circular Economy in the Automotive Industry

The relationship of the automotive industry with raw material supply is a major challenge and presents numerous obstacles. Automobiles are among the most complex products, using a large variety of materials. Safety, eco-friendliness and comfort requirements, together with physical, chemical and economic limitations, set the framework within which this industry continuously optimizes the efficient and responsible use of resources. The concept of the circular economy addresses the issues of waste generation, resource scarcity and economic advantage. Circularity, however, is not new to the automotive industry: several efforts have already been made to foster material reuse, product remanufacturing and recycling. The aim of this study is to give an overview of how producers comply with the growing demands on the one hand, and gain efficiency and increase profitability from the circular economy on the other.

Energy-Aware Routing in Mobile Wireless Sensor Networks

Wireless sensor networks are resource-constrained networks in which energy is the most critical resource. Therefore, energy conservation is a major concern in the deployment of wireless sensor networks. This work makes use of an extended Greedy Perimeter Stateless Routing (eGPSR) protocol that mainly focuses on energy-efficient data transmission. The protocol exploits the fact that a message sent to a distant node consumes more energy than a message sent over a short-range transmission. Every cluster contains a head set that consists of several virtual cluster heads, and routing is decided by the head set members. The energy level of the received signal is the main criterion for choosing the head set from among the cluster members. The experimental results show that using eGPSR for routing improves throughput with comparatively lower delay.
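
A simplified sketch of the head-set selection idea described above (hypothetical node records and field names; not the actual eGPSR implementation):

```python
def select_head_set(nodes, head_set_size):
    """Pick the nodes with the highest received-signal energy level to act as
    the virtual cluster heads (the 'head set') of a cluster.

    nodes: list of dicts like {"id": ..., "signal_energy": ..., "residual_energy": ...}
    """
    ranked = sorted(nodes, key=lambda n: n["signal_energy"], reverse=True)
    return ranked[:head_set_size]

# Toy cluster of five nodes; the two with the strongest received signal become heads
cluster = [
    {"id": 1, "signal_energy": 0.8, "residual_energy": 0.9},
    {"id": 2, "signal_energy": 0.5, "residual_energy": 0.7},
    {"id": 3, "signal_energy": 0.9, "residual_energy": 0.6},
    {"id": 4, "signal_energy": 0.4, "residual_energy": 0.8},
    {"id": 5, "signal_energy": 0.7, "residual_energy": 0.5},
]
print([n["id"] for n in select_head_set(cluster, head_set_size=2)])
```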

Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature

A method for object tracking in motion-blurred images is proposed in this article, and this paper shows that object tracking can be improved with this approach. We use the mean shift algorithm as the main tracker to follow different objects. However, mean shift cannot track the selected object accurately in blurred scenes, so the wavelet transform is used to improve the tracking accuracy. We use a feature called blur extent, which helps us obtain better tracking results; it is calculated with the Haar wavelet. This feature can be looked at from two angles: determining whether an image is blurred at all, and determining to what extent it is blurred. In fact, this feature influences the covariance matrix of the mean shift algorithm and leads to better tracking performance. The method concentrates mostly on the motion blur parameter. The results reveal the ability of our method to achieve more accurate tracking.
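
As a rough sketch of how a Haar-wavelet blur estimate can be computed (a simplified illustrative variant using PyWavelets, not a faithful reimplementation of the blur-extent metric used in the article):

```python
import numpy as np
import pywt

def _block_max(a, k):
    """Max-pool a 2D array with non-overlapping k x k windows."""
    h, w = (a.shape[0] // k) * k, (a.shape[1] // k) * k
    a = a[:h, :w]
    return a.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def blur_extent(image, threshold=35.0):
    """Simplified Haar-wavelet blur estimate for a 2D grayscale array (0-255).

    Returns the fraction of detected edge points whose fine-scale response is
    weak relative to coarser scales (larger value = more blur)."""
    current = np.asarray(image, dtype=float)
    emaps = []
    for _ in range(3):                              # 3-level Haar decomposition
        ca, (ch, cv, cd) = pywt.dwt2(current, "haar")
        emaps.append(np.sqrt(ch**2 + cv**2 + cd**2))  # edge magnitude per level
        current = ca
    # Align the three edge maps to the coarsest resolution by max-pooling
    e1, e2, e3 = _block_max(emaps[0], 4), _block_max(emaps[1], 2), emaps[2]
    h = min(e1.shape[0], e2.shape[0], e3.shape[0])
    w = min(e1.shape[1], e2.shape[1], e3.shape[1])
    e1, e2, e3 = e1[:h, :w], e2[:h, :w], e3[:h, :w]
    edges = np.maximum(np.maximum(e1, e2), e3) > threshold   # edge points
    soft = edges & (e1 < e2) & (e2 < e3)                     # coarse-dominated edges
    blurred = soft & (e1 < threshold)                        # lost fine detail
    return blurred.sum() / soft.sum() if soft.sum() else 0.0
```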

Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

In 1-D thermodynamic models, the complex oblique shock phenomenon can be simply represented as a normal shock at the constant-area section in order to simulate a sharp pressure increase and velocity decrease. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, yet most researchers choose an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant-area-mixing (CAM) ejector and a constant-pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for the two types of ejectors. These reference data are then used as input to the 1D model to calculate the lengths and diameters of the ejectors. Afterwards, the design output geometry calculated by the 1D model is compared directly with the corresponding experimental geometry. It was found that the ejector dimensions obtained by the 1D model agree well with the experimental ejector data for both CAM and CPM. Furthermore, it is shown that the normal shock location affects only the constant-area length, and the assumption of a normal shock at the inlet of the constant-area duct yields the more accurate length. Taking previous 1D models into account, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
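
For reference, the jump conditions applied at the assumed shock location in such 1-D models are the standard normal shock relations (textbook formulas, not taken from the paper):

```latex
% Normal shock relations: M_1, M_2 are the upstream/downstream Mach numbers,
% p_1, p_2 the static pressures, and \gamma the specific heat ratio.
\[
M_2^{\,2} = \frac{1 + \dfrac{\gamma - 1}{2}\, M_1^{\,2}}{\gamma M_1^{\,2} - \dfrac{\gamma - 1}{2}},
\qquad
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\left(M_1^{\,2} - 1\right)
\]
```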

Contribution to the Success of the Energy Audit in the Industrial Environment: A Case Study about Audit of Interior Lighting for an Industrial Site in Morocco

The energy audit is the essential first step to ensure a good definition of energy control actions. The in-depth study of the various energy-consuming equipment makes it possible to determine the most cost-effective actions and investments for the company. The analysis focuses on the energy consumption of production equipment and utilities (lighting, heating, air conditioning, ventilation, transport). Successful implementation of this approach requires, however, taking a number of prerequisites into account. This paper proposes a number of useful recommendations for achieving better results from an energy audit, together with a case study of the interior lighting audit of a Moroccan company showing the gains that can be achieved through such an audit.

Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

A bioassay is the measurement of the potency of a chemical substance by its effect on a living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are computed accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is split into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy versus precision. The third step is normalization, in which data are rescaled between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, in which key chemical properties and attributes are identified. The streamlined results are then analyzed for prediction of effectiveness with various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
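
A minimal sketch of the four preprocessing steps using scikit-learn (the feature matrix, labels, and parameter choices here are hypothetical placeholders, not the authors' actual pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical bioassay data: rows = compounds, columns = chemical descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = rng.integers(0, 2, size=500)        # active / inactive label

# Step 1: instance selection - split into training, testing and validation sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Step 2: discretization - partition each descriptor into bins
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform").fit(X_train)
X_train_d = disc.transform(X_train)

# Step 3: normalization - rescale values to the [0, 1] range
scaler = MinMaxScaler().fit(X_train_d)
X_train_n = scaler.transform(X_train_d)

# Step 4: feature selection - keep the most informative descriptors
selector = SelectKBest(f_classif, k=10).fit(X_train_n, y_train)
X_train_s = selector.transform(X_train_n)
print(X_train_s.shape)
```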

Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of the pair of meta-classifiers (level 1) from the tree-based and Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, evaluated over three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between the factors was observed only for time. The accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
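
A rough sketch of a three-level stacking layout in scikit-learn (the specific classifier choices and dataset below are illustrative placeholders, not the exact ensembles or data used in the article):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Level 0/1: pairs of base classifiers of the same family feed a meta-classifier of that family
tree_meta = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=50, random_state=0))],
    final_estimator=DecisionTreeClassifier(random_state=0),
)
bayes_meta = StackingClassifier(
    estimators=[("gnb", GaussianNB()),
                ("mnb", MultinomialNB())],
    final_estimator=GaussianNB(),
)

# Level 2: the final decision-maker combines the tree-based and Bayesian meta-classifiers
level2 = StackingClassifier(
    estimators=[("tree_meta", tree_meta), ("bayes_meta", bayes_meta)],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Example run on a small multi-class dataset
X, y = load_iris(return_X_y=True)
print(cross_val_score(level2, X, y, cv=5).mean())
```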