Microstructural Evolution of an Interface Region in a Nickel-Based Superalloy Joint Produced by Direct Energy Deposition

Microstructure analysis of additively manufactured (AM) materials is an important step in understanding the interrelationship between mechanical properties and materials performance. Literature on the effect of laser-based AM process parameters on the microstructure of the substrate-deposit interface is limited. The interface region, where the substrate and deposit adjoin, is characterized by the presence of a fusion zone (FZ) and a heat affected zone (HAZ) that experience rapid thermal cycling, resulting in thermally induced transformations. Inconel 718 was utilized as the work material for both the substrate and the deposit. Three blocks of Inconel 718 were deposited by Direct Energy Deposition (DED) using three different laser powers: 550 W, 750 W, and 950 W. A coupled thermo-mechanical transient approach was utilized to correlate temperature history with the evolution of microstructure. The thermal history of the deposition process was monitored with thermocouples installed inside the substrate material. The interface regions of the blocks were analysed with Optical Microscopy (OM) and Scanning Electron Microscopy (SEM), including the electron backscatter diffraction (EBSD) technique. Laser power was found to influence the dissolution of intermetallic precipitated phases in the substrate and grain growth in the interface region. Microstructure and thermal history data were utilized to draw comparisons among the investigated process parameters.
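
As a hedged illustration of how a monitored thermal history can be reduced to quantities that correlate with microstructural change (peak temperature, post-peak cooling rate), the following sketch processes a thermocouple log; the times, temperatures, and sampling rate are invented placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical thermocouple log (time in s, temperature in deg C); the
# values are invented, not the paper's data.
time = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
temp = np.array([25.0, 480.0, 1120.0, 950.0, 760.0, 610.0, 495.0])

peak = int(np.argmax(temp))
print(f"peak temperature: {temp[peak]:.0f} C")

# Post-peak cooling rate by finite differences; its magnitude is the kind
# of quantity one would correlate with dissolution and grain growth.
rate = np.gradient(temp[peak:], time[peak:])
print(f"mean cooling rate after peak: {-rate.mean():.0f} C/s")
```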

A Multigranular Linguistic Additive Ratio Assessment Model in Group Decision Making

Most multi-criteria group decision making (MCGDM) problems dealing with qualitative criteria require consideration of a large body of expert information. Experts commonly have different degrees of knowledge when assessing alternatives against the criteria, so it is logical for them to express their judgments on different evaluation scales, i.e., multigranular linguistic scales. In this context, we propose an extension of the classical additive ratio assessment (ARAS) method to hierarchical linguistic term sets for managing multigranular linguistic scales in uncertain contexts, where uncertainty is modeled by means of linguistic information. The proposed approach is called the extended linguistic hierarchy ARAS (ELH-ARAS) method. Within the ELH-ARAS approach, decision makers (DMs) can examine the results (the ranking of the alternatives) in a decomposed manner, i.e., not only at one level of the hierarchy but also at intermediate ones. The developed approach also allows a feedback transformation, i.e., the collective final results of all experts can be transformed to any level of the extended linguistic hierarchy that each expert previously used. The ELH-ARAS technique therefore makes it easier for decision makers to understand the results. Finally, an MCGDM case study is given to illustrate the proposed approach.
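
For readers unfamiliar with the underlying method, here is a minimal sketch of the classical crisp ARAS steps that ELH-ARAS extends with linguistic information; the decision matrix, weights, and criterion types are illustrative assumptions, not data from the case study.

```python
import numpy as np

# Classical (crisp) ARAS: optimal row, sum normalization, weighted
# optimality function S_i, utility degree K_i = S_i / S_0.
X = np.array([
    [7.0, 6.0, 8.0],   # alternative A1
    [8.0, 5.0, 6.0],   # alternative A2
    [6.0, 8.0, 7.0],   # alternative A3
])
w = np.array([0.4, 0.35, 0.25])          # criteria weights, sum to 1
benefit = np.array([True, True, False])  # False marks a cost criterion

# Step 1: prepend the optimal alternative (best value per criterion).
x0 = np.where(benefit, X.max(axis=0), X.min(axis=0))
D = np.vstack([x0, X])

# Step 2: invert cost criteria, then sum-normalize each column.
D = np.where(benefit, D, 1.0 / D)
N = D / D.sum(axis=0)

# Step 3: weighted sums and utility degrees relative to the optimal row.
S = (N * w).sum(axis=1)
K = S[1:] / S[0]
print("utility degrees:", np.round(K, 3))   # higher K -> better rank
```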

Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data

Twitter is one of the most popular social media platforms where users share their opinions on different subjects. Twitter can be considered a great source for text mining due to the high volumes of data generated on the platform daily. Many businesses, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets about South African Telcos. Results from this study show that LSI is much faster than LDA; however, LDA yields better results, with topic coherence 8% higher for the best-performing model in this experiment. A higher topic coherence score indicates better performance of the model.
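
A hedged sketch of the LDA-vs-LSI comparison follows; the tokenized "tweets" are invented placeholders, not the study's dataset, and u_mass coherence is used here for simplicity (the study's coherence measure may differ).

```python
from gensim import corpora
from gensim.models import LdaModel, LsiModel, CoherenceModel

# Placeholder tokenized tweets about telco topics.
docs = [
    ["network", "down", "data", "slow"],
    ["airtime", "bundle", "price", "data"],
    ["customer", "service", "complaint", "network"],
    ["signal", "coverage", "slow", "upgrade"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=1)
lsi = LsiModel(corpus=corpus, id2word=dictionary, num_topics=2)

# Compare the two models on the same coherence yardstick.
for name, model in [("LDA", lda), ("LSI", lsi)]:
    cm = CoherenceModel(model=model, corpus=corpus,
                        dictionary=dictionary, coherence="u_mass")
    print(name, round(cm.get_coherence(), 3))
```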

Un Pavillon – Un Monument: The Modern Palace and the Case of the U.S. Embassy in Karachi, Pakistan (1955–59)

This paper investigates civic representation in mid-century diplomatic buildings through the case of the U.S. Embassy in Karachi, Pakistan (1955–59), designed by the Austrian-American architect Richard Neutra (1892–1970) and the American architect Robert Alexander (1907–92). Texts, magazines, and oral histories from that time highlighted the need for a new postwar expression of American governmental architecture, leaning toward modernization, technology, and monumentality. Descriptive, structural, and historical analyses of the U.S. Embassy in Karachi revealed the emergence of a new prototypical solution for postwar diplomatic buildings: the combination of one main orthogonal block, seen as a modern-day corps de logis, and a flanking arcuated pavilion, often organized in one or two stories. Although the U.S. Embassy relied on highly industrialized techniques and abstract images of social progress, archival work in the Neutra archives at the University of California, Los Angeles, revealed that much of this project was adapted to vernacular elements and traditional forms, such as the intriguing use of reinforced concrete barrel vaults.

Forecasting 24-Hour Ahead Electricity Load Using Time Series Models

Forecasting electricity load is important for purposes such as planning, operation, and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and inform correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, LSTM networks, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using two forecasting accuracy criteria, namely the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time series forecasts we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
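
The sketch below shows a 24-hour-ahead SARIMA forecast evaluated with MAE and RMSE, as described above; the synthetic daily-seasonal series stands in for the NREL load data, and the (1,0,1)x(1,1,1,24) order is an illustrative choice, not the paper's fitted model.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly load with a 24-hour seasonal pattern plus noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

# Hold out the final day and forecast it.
train, test = load[:-24], load[-24:]
fit = SARIMAX(train, order=(1, 0, 1),
              seasonal_order=(1, 1, 1, 24)).fit(disp=False)
forecast = fit.forecast(steps=24)

mae = np.mean(np.abs(forecast - test))
rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}")
```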

Zinc Oxide Nanoparticles Modified with Galactose as Potential Drug Carrier with Reduced Release of Zinc Ions

The toxicity of bare zinc oxide nanoparticles used as drug carriers may be the result of the release of zinc ions. To address this, zinc oxide nanoparticles modified with galactose were obtained. The process of their formation was conducted in a microwave field, and the physicochemical properties of the obtained products were studied. The size and electrokinetic potential were determined using the dynamic light scattering technique, and the crystalline properties were assessed by X-ray diffractometry. Fourier-transform infrared spectroscopy was used to confirm the formation of the desired products. The release of zinc ions from the prepared products was analyzed and compared to that from the bare oxide. It was found that modification of zinc oxide nanoparticles with galactose limits the release of zinc ions, which are responsible for the toxic effect of the whole carrier-drug conjugate.

Application of Molecular Materials in the Manufacture of Flexible and Organic Devices for Photovoltaic Applications

Many sustainable approaches to generating electric energy have emerged in the last few decades; one of them is solar cells. Yet solar cells have the disadvantage that the manufacturing processes for inorganic semiconductors are highly polluting; therefore, the use of molecular semiconductors must be considered. In this work, the allene compounds C24H26O4 and C24H26O5 were used as dopants to manufacture semiconductor films based on PbPc by the high-vacuum evaporation technique. IR spectroscopy was carried out to determine the phase and any significant chemical changes that might occur during the thermal evaporation. According to UV-visible spectroscopy and Tauc's model, the deposition process generated thin films with an activation energy range of 1.47 eV to 1.55 eV for direct transitions and 1.29 eV to 1.33 eV for indirect transitions. These values place the manufactured films within the range of low-bandgap semiconductors. Flexible devices were manufactured with the structure polyethylene terephthalate (PET)/indium tin oxide (ITO)/organic semiconductor/Cubic Close Packed (CCP). The devices were characterized by evaluating electrical conductivity using the four-probe collinear method, and I-V curves were obtained under different lighting conditions at room temperature. OS1 (PbPc/C24H26O4) showed ohmic behavior, while OS2 (PbPc/C24H26O5) reached higher current values at lower voltages. The results obtained show that semiconductor devices doped with allene compounds can be used in the manufacture of optoelectronic devices.
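
A hedged sketch of the Tauc analysis mentioned above: the linear region of (αhν)^(1/n) versus photon energy is extrapolated to zero to estimate the gap. The data are synthetic and the extraction is simplified; exponent 2 corresponds to direct allowed transitions, 1/2 to indirect ones.

```python
import numpy as np

# Synthetic absorption data following a direct-gap model with Eg = 1.50 eV.
E = np.linspace(1.2, 2.2, 200)                # photon energy, eV
Eg_true = 1.50
alpha = np.where(E > Eg_true, (E - Eg_true) ** 0.5 / E, 0.0)

y = (alpha * E) ** 2                          # Tauc ordinate for a direct gap
mask = y > 0.2 * y.max()                      # crude pick of the linear region
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"estimated direct gap: {-intercept / slope:.2f} eV")
```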

Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Creating favourable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. This paper describes a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students’ comprehension. Essentially, this approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of a student’s subjective experience (value-based emotional, contextual, procedural, and communicative) during the educational process; (3) linking together different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual, and conceptual (scientific) mathematical spaces by applying real-life situational modelling. The article describes approaches to the practical use of these foundational concepts. The primary goal was to identify how the proposed methods and techniques influence understanding of the material used in teaching mathematics. The study included an experiment in which 256 secondary school students took part: 142 in the study group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, “Derivative” and “Trigonometric functions”, was evaluated. Control group participants were taught using traditional methods. Students in the study group were taught using the holistic method: under the teacher’s guidance, they carried out assignments designed to establish linkages between a notion’s characteristics and to convert information from one mode of presentation to another, as well as assignments that required the ability to operate with all modes of presentation. Identifying, accounting for, and transforming subjective experience were associated with methods of stimulating the emotional value component of the studied mathematical content (discussions of lesson titles, assignments aimed at creating study dominants, performing theme-related physical exercise ...). The use of techniques that form inter-subject notions based on linkages between real, perceptual, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analysed by presenting students in each of the groups with a final test on each of the studied topics. The test included assignments that required building real situational models. Statistical analysis was used to aggregate the test results. Pearson’s χ² criterion was used to assess the statistical significance of the results (pass/fail on the modelling test). A significant difference was revealed (p < 0.001), which allowed us to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. The total number of assignments completed by each student was analysed as well, with average results calculated for each group. The statistical significance of differences against the quantitative criterion (number of completed assignments) was determined using Student’s t-test, which showed that students in the study group completed significantly more assignments than those in the control group (p = 0.0001).
The authors thus conclude that the observed increase in the level of comprehension of the study material resulted from the applied methods and techniques.
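
For concreteness, here is a hedged sketch of the two significance tests named above; the contingency counts and assignment-score vectors are invented placeholders that only respect the group sizes (142 and 114), not the study's data.

```python
import numpy as np
from scipy import stats

# Pearson's chi-squared test on a 2x2 pass/fail contingency table:
#                         passed  failed
contingency = np.array([[118, 24],    # study group (n=142), placeholder
                        [ 61, 53]])   # control group (n=114), placeholder
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")

# Student's t-test on the number of completed assignments per student:
rng = np.random.default_rng(1)
study = rng.normal(8.2, 1.5, 142)      # placeholder scores
control = rng.normal(6.9, 1.5, 114)
t, p = stats.ttest_ind(study, control)
print(f"t={t:.2f}, p={p:.4g}")
```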

The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

This paper presents the analysis of six different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), type-2 generalized fuzzy Petri nets (T2GFPN), flexible generalized fuzzy Petri nets (FGFPN), and binary Petri nets (BPN). These classes were simulated in the dedicated software PNeS® to analyse their pros and cons on example models of the decision-making process in passenger transport logistics. The paper includes the analysis of two approaches: one in which the input values are filled in with the experts’ knowledge, and one in which fuzzy expectations, represented by output values, are added as well. These approaches exercise the possibilities of the triples of functions, which are instantiated with different combinations of t-norms and s-norms.
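
To make the role of the operator triples concrete, here is a minimal sketch of one fuzzy Petri net transition firing in which the t-/s-norm combination can be swapped; the rule structure is illustrative, not PNeS's internal model.

```python
# Common t-norms (input aggregation) and s-norms (output merging).
def t_min(a, b):  return min(a, b)          # Zadeh t-norm
def t_prod(a, b): return a * b              # product t-norm
def s_max(a, b):  return max(a, b)          # Zadeh s-norm
def s_prob(a, b): return a + b - a * b      # probabilistic sum s-norm

def fire(inputs, certainty, t_norm, s_norm, prior=0.0):
    """Aggregate input-place truth degrees with a t-norm, scale by the
    rule's certainty factor, then merge into the output place's existing
    degree with an s-norm."""
    strength = inputs[0]
    for x in inputs[1:]:
        strength = t_norm(strength, x)
    return s_norm(prior, t_norm(strength, certainty))

tokens = [0.8, 0.6]            # truth degrees in the input places
for tn, sn in [(t_min, s_max), (t_prod, s_prob)]:
    print(tn.__name__, sn.__name__, round(fire(tokens, 0.9, tn, sn), 3))
```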

Optimizing Data Evaluation Metrics for Fraud Detection Using Machine Learning

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, as society’s knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate others. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques offer a possible way to help counter these developments. This research explores how various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent datasets; the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which split and technique lead to the best results.
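
The evaluation loop described above might look like the following sketch; the synthetic imbalanced dataset, the random forest, and the particular split ratios are illustrative assumptions, not the study's datasets or model list.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Imbalanced stand-in for a fraud dataset (5% positive class).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)

for test_size in [0.1, 0.2, 0.3, 0.4, 0.5]:     # five train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)
    print(f"test={test_size:.0%} "
          f"acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```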

The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which sources of evidence relate to a specific investigation. A growing concern is that the various processes, technologies, and procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets, and experiments were conducted using various sets of ISAs and virtual machines (VMs). There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
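
As a hedged illustration of the kind of task a Hash Set Agent performs, the sketch below hashes every file under a directory and flags matches against a known-hash set; the paths, the MD5 choice, and the hash values are placeholders, not the framework's implementation.

```python
import hashlib
from pathlib import Path

# Placeholder known-hash set (e.g. hashes of files of interest).
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",   # illustrative MD5 value
}

def hash_file(path: Path, algo: str = "md5", chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large evidence files fit in memory."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def scan(root: str):
    """Yield (path, digest) for every file whose hash is in the known set."""
    for p in Path(root).rglob("*"):
        if p.is_file():
            digest = hash_file(p)
            if digest in KNOWN_HASHES:
                yield p, digest

for path, digest in scan("."):
    print(f"HIT {digest} {path}")
```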

The Possibility of Solving a 3x3 Rubik’s Cube in Under 3 Seconds

The Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again; the newest record is 3.47 seconds. Many factors affect the timing, including turns per second (tps), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of cube-solving time is discussed using convex optimization, and an extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving process, the paper suggests a list of speed improvement techniques. Based on the analysis of the world records, there is a high possibility that the 3-second mark will be broken soon.
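
As a back-of-the-envelope check of the timing relation implied above (solve time ≈ move count / turns per second), the following sketch evaluates a few move-count/tps combinations against the 3-second mark; the numbers are illustrative and this is not the paper's convex-optimization bound.

```python
# Simple timing model: solve time = moves / tps.
RECORD = 3.47                 # current world record, seconds
for moves, tps in [(45, 10), (50, 12), (40, 14)]:
    t = moves / tps
    verdict = "beats" if t < 3.0 else "misses"
    print(f"{moves} moves at {tps} tps -> {t:.2f} s ({verdict} 3 s, "
          f"record {RECORD} s)")
```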

Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem: an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes of 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed while the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects K-Means clustering centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and k-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy and F1 score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, during the conducted experiments. This method therefore proved efficient in identifying faces in images.
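
A rough sketch of the PCA, K-Means, CNN pipeline follows; the network shape, the number of PCA components, and the use of cluster distances as CNN input are my assumptions, not the paper's architecture. The Olivetti faces bundled with scikit-learn are the ORL dataset (40 classes, 10 images each).

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
import tensorflow as tf

# Stage 1: PCA compresses each 64x64 face to 64 coefficients.
faces = fetch_olivetti_faces()
X = PCA(n_components=64, whiten=True).fit_transform(faces.data)

# Stage 2: K-Means on the compressed data; distances to the 40 centers
# become the feature vector fed to the CNN.
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X)
feats = km.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(
    feats[..., None], faces.target, test_size=0.2,
    stratify=faces.target, random_state=0)

# Stage 3: a small 1D CNN over the 40-dimensional feature vector.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(40, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(40, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=90, verbose=0)
print("test accuracy:", model.evaluate(X_te, y_te, verbose=0)[1])
```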

A Web-Based Mobile System for Promoting Agribusiness in Northern Nigeria

This research aimed to develop a web-based mobile system and to gain a better understanding of how a web-based mobile system can support farmers in Kebbi State. Guided by the answers to the research questions, a conceptual framework of the entire system was implemented using the Unified Modelling Language (UML). The work involved a review of existing research on web-based mobile technology for farmers in some countries and in other geographical areas within Nigeria. This research explored how farmers in Northern Nigeria, especially in Kebbi State, make use of a web-based mobile system for agribusiness, and examined the benefits of using web-based mobile systems as well as the challenges farmers face in using them. Considering the dynamic nature of information and communication technology, this research employed survey and focus group discussion (FGD) methods. Stratified, random, purposive, and convenience sampling techniques were adopted to select the sample, and a questionnaire and an FGD guide were used to collect data. The survey finds that most farmers in Kebbi State use alternative media to get relevant information for their agribusiness. The research also reveals that using a web-based mobile system can benefit farmers significantly. Finally, the study successfully developed and implemented the proposed system using mobile technology, in addition to the framework design.

Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method

To obtain high mechanical performance, the fresh conditions of a mortar are decisive. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for proper design of the mixes prior to their placement on construction sites. Absorption is a determining factor in mix design because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. This work therefore focuses on the mechanical behavior of recycled mortars manufactured with moisture correction using the halogen light thermogravimetric balance (TBHL) technique, in comparison with the traditional ASTM C 128 standard method. The TBHL technique is advantageous in terms of reduced consumption of resources such as materials, energy, and time. The results show that, in contrast to the ASTM C 128 method, the alternative TBHL technique yields higher precision in the absorption values of recycled aggregates, which is reflected not only in a more sustainable characterization process for construction materials but also in an effect on the mechanical performance of the recycled mortars.
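
To show where the absorption measurement feeds into mix design, here is a hedged sketch of the standard aggregate moisture correction; the formula is the usual batching adjustment and all quantities are illustrative, not the paper's measurements.

```python
# Standard mix-water correction for aggregate moisture state.
agg_mass = 1350.0      # aggregate in the batch, kg
design_water = 205.0   # design mixing water, kg
absorption = 0.062     # absorption from TBHL or ASTM C 128 (6.2%)
moisture = 0.031       # current free moisture of the stockpile (3.1%)

# If moisture < absorption, the aggregate will soak up mix water,
# so that water must be added back (and vice versa).
correction = agg_mass * (absorption - moisture)
batch_water = design_water + correction
print(f"water correction: {correction:+.1f} kg -> batch {batch_water:.1f} kg")
```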

Design and Analysis of a Low-Power, High-Speed and Area-Efficient 2-Bit Digital Magnitude Comparator in 90 nm CMOS Technology Using Gate Diffusion Input

Digital magnitude comparators based on the Gate Diffusion Input (GDI) implementation technique are fast and area-efficient, and they consume less power than other implementation techniques. However, they are less efficient for some logic gates and lack full voltage swing. In this paper, we compare the performance of the GDI implementation technique with other implementation methods, such as static CMOS, Pass Transistor Logic (PTL), and Transmission Gate (TG), in 90 nm, 120 nm, and 180 nm CMOS technologies using the BSIM4 MOS model. We propose a hybrid implementation methodology for digital magnitude comparators that significantly improves the power, speed, area, and voltage swing requirements. Simulation results reveal that the hybrid implementation of digital magnitude comparators shows improvements of 10.84% in power dissipation, 41.6% in propagation delay, and 47.95% in power-delay product (PDP) compared to the usual GDI implementation method. We used Microwind & Dsch Version 3.5 as well as the Tanner EDA 16.0 tools for simulation.
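
Independent of the transistor-level style (CMOS, PTL, TG, GDI, or hybrid), every implementation must realize the same 2-bit comparator logic; the sketch below is a behavioral check of the standard gate-level equations against direct numeric comparison, not a circuit simulation.

```python
from itertools import product

def comparator(a1, a0, b1, b0):
    """Standard 2-bit magnitude comparator equations (A = a1a0, B = b1b0)."""
    eq1, eq0 = ~(a1 ^ b1) & 1, ~(a0 ^ b0) & 1   # bitwise XNOR per bit
    gt = (a1 & ~b1 & 1) | (eq1 & a0 & ~b0 & 1)  # A > B
    lt = (~a1 & b1 & 1) | (eq1 & ~a0 & b0 & 1)  # A < B
    eq = eq1 & eq0                               # A == B
    return gt, eq, lt

# Exhaustively verify all 16 input combinations.
for a1, a0, b1, b0 in product([0, 1], repeat=4):
    A, B = 2 * a1 + a0, 2 * b1 + b0
    assert comparator(a1, a0, b1, b0) == (int(A > B), int(A == B), int(A < B))
print("all 16 input combinations verified")
```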

Attitudes of Gratitude: An Analysis of 30 Cancer Narratives Published by Leading U.S. Cancer Care Centers

This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers: The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and Seattle Cancer Care Alliance. Thirty patient stories, 10 from each cancer center website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three of these cancer care centers are teaching hospitals with university affiliations that emphasize an evidence-based scientific approach to treatment utilizing the latest research and cutting-edge techniques and technology. The featured cancer stories, however, suggest positive outcomes based on anecdotal narratives, as opposed to the science-based treatment models employed by the cancer centers. The analysis of the 30 sample stories found a skewed representation of the “cancer experience” that emphasizes positive outcomes while minimizing or excluding the more negative realities of cancer diagnosis and treatment. The stories also deemphasize patient agency, instead focusing on deference and gratitude toward the cancer care centers, which are cast in the role of savior.

Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where neither GNSS nor any other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls, and extracting attitude from these surfaces can serve as an accurate aiding source that directly combats the errors arising from gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its effectiveness. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
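
A minimal sketch of the aiding idea follows: fit a plane to a floor point cloud via SVD and read roll and pitch from its normal. The geometry, noise level, and angle conventions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, (500, 2))

# Ground-truth floor normal as seen from a camera tilted by (roll, pitch).
roll_true, pitch_true = np.radians(2.0), np.radians(-3.0)
n_true = np.array([np.sin(pitch_true),
                   -np.sin(roll_true) * np.cos(pitch_true),
                   np.cos(roll_true) * np.cos(pitch_true)])

# Points on the plane n.p = 1.5, plus depth-camera-like noise.
z = (1.5 - xy @ n_true[:2]) / n_true[2]
pts = np.column_stack([xy, z]) + rng.normal(0, 0.005, (500, 3))

# Plane fit: the normal is the right singular vector of the centered
# cloud with the smallest singular value.
centered = pts - pts.mean(axis=0)
normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
normal *= np.sign(normal[2])          # make the normal point "up"

roll = np.arctan2(-normal[1], normal[2])
pitch = np.arcsin(normal[0])
print(f"roll={np.degrees(roll):+.2f} deg, pitch={np.degrees(pitch):+.2f} deg")
```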

Vague Multiple Criteria Decision Making Analysis Method for Fighter Aircraft Selection

Fighter aircraft selection is one of the most critical strategic decisions in defense multiple criteria decision-making analysis, as it increases the decisive power of air defense and its superiority in the defense strategy. Vague set theory is an adequate approach for modeling vagueness, uncertainty, and imprecision in decision-making problems. This study integrates vague set theory and the technique for order of preference by similarity to ideal solution (TOPSIS) to support fighter aircraft selection, and the proposed method is applied to the selection of fighter aircraft for the Air Force. In the proposed approach, the ratings of alternatives and the importance weights of criteria for fighter aircraft selection are represented by vague sets. Finally, an illustrative example of fighter aircraft selection is given to demonstrate the applicability and effectiveness of the proposed approach. The fighter aircraft candidates were evaluated under six criteria: costability, payloadability, maneuverability, speedability, stealthility, and survivability. Analysis results show that the best fighter aircraft is the one with the highest closeness coefficient value. The proposed method can also be applied to solve other multiple criteria decision analysis problems.
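
The closeness coefficient mentioned above comes from the TOPSIS ranking step, sketched below in crisp form; the vague-set arithmetic the paper wraps around these steps is omitted, the 3-alternative matrix and weights are illustrative, and all six criteria are treated as benefit-type for brevity.

```python
import numpy as np

# Illustrative ratings of three aircraft on six criteria.
X = np.array([
    [0.7, 0.8, 0.6, 0.9, 0.5, 0.8],   # aircraft A1
    [0.9, 0.6, 0.7, 0.7, 0.8, 0.6],   # aircraft A2
    [0.6, 0.9, 0.8, 0.6, 0.7, 0.9],   # aircraft A3
])
w = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])

R = X / np.linalg.norm(X, axis=0)             # vector normalization
V = R * w                                     # weighted normalized matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points

d_pos = np.linalg.norm(V - v_pos, axis=1)     # distance to the ideal
d_neg = np.linalg.norm(V - v_neg, axis=1)     # distance to the anti-ideal
cc = d_neg / (d_pos + d_neg)                  # closeness coefficient
print("closeness:", np.round(cc, 3), "-> best: A" + str(int(cc.argmax()) + 1))
```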

Block-Based 2D to 3D Image Conversion Method

With the advent of three-dimensional (3D) technology, there is a great deal of research on converting 2D images to 3D images. The main difference between 2D and 3D is the visual illusion of depth in 3D images, and many depth estimation techniques have emerged in recent years. The objective of this paper is to convert 2D images to 3D images with less computation time. To this end, the input image is divided into blocks from which depth information is obtained, and a depth map is generated from this information. The 3D image is then produced by warping the original image with the depth map. The proposed method is tested on the Make3D and NYU-v2 datasets, and the experimental results are compared with other recent methods. The proposed method proved to work with less computation time and good accuracy.
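
A toy sketch of such a block-based pipeline follows: one depth value per block from a simple local-contrast cue, then a depth-dependent horizontal shift to synthesize the second view (DIBR-style warping, holes left unfilled). The cue, block size, and disparity scale are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

H, W, B = 64, 64, 8                  # image size and block size
rng = np.random.default_rng(3)
img = rng.random((H, W))             # placeholder grayscale image

# Block-wise depth estimation: local contrast (std) as a crude depth cue.
depth = np.zeros((H, W))
for r in range(0, H, B):
    for c in range(0, W, B):
        depth[r:r + B, c:c + B] = img[r:r + B, c:c + B].std()
depth = (depth - depth.min()) / (np.ptp(depth) + 1e-9)

# Warp: shift each pixel by a depth-dependent disparity to form the
# right-eye view of the stereo pair.
max_disp = 4
right = np.zeros_like(img)
cols = np.arange(W)
for r in range(H):
    shifted = np.clip(cols - (depth[r] * max_disp).astype(int), 0, W - 1)
    right[r, shifted] = img[r]
print("stereo pair synthesized:", img.shape, right.shape)
```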