Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to dispose of large volumes of waste effectively. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line, and balancing optimization makes the line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) that accounts for robotic efficiency differences is established to minimize cycle time, energy consumption, and hazard index and to compute their global optima. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case involving two types of computers. Comparing the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the strong performance of the algorithm, and the resulting Pareto solution set offers decision-makers multiple options.

A Multi-Feature Deep Learning Algorithm for Urban Traffic Classification with Limited Labeled Data

Acoustic sensors embedded in smart street lights can help capture city activities such as car honking, sirens, events, and traffic. The acoustic data from such scenarios are complex because multiple audio streams originate from different events, and once decomposed into independent signals, the amount of retrieved data is too small to train deep neural networks. This paper therefore addresses two challenges: (a) separating the mixed signals, and (b) building an efficient acoustic classifier under data paucity. We propose a supervised deep learning architecture in which the captured mixed acoustic data are first analyzed with the Fast Fourier Transform (FFT), the signal is then filtered to remove noise, and finally decomposed into independent signals by fast independent component analysis (FastICA). To address data paucity, we propose a multi-feature deep neural network whose high performance is reflected in our experiments when compared with a conventional convolutional neural network (CNN) and a multi-layer perceptron (MLP).
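
A minimal sketch of the separation stage described above, under stated assumptions: the sampling rate, cut-off frequencies, channel count and synthetic sources are all illustrative, and a Butterworth band-pass stands in for the paper's FFT-based noise filtering before FastICA is applied.

```python
# Illustrative sketch (not the paper's exact pipeline): band-pass noise filtering
# of a two-channel mixed acoustic recording followed by FastICA separation.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

fs = 16_000                                  # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Two synthetic sources standing in for, e.g., a siren and a horn event.
sources = np.c_[np.sin(2 * np.pi * 440 * t),
                np.sign(np.sin(2 * np.pi * 3 * t))]
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown in practice
mixed = sources @ mixing.T                   # two microphone channels

# Noise filtering: 4th-order Butterworth band-pass (assumed 100 Hz - 6 kHz band).
sos = butter(4, [100, 6000], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, mixed, axis=0)

# FastICA recovers statistically independent components from the mixture.
ica = FastICA(n_components=2, random_state=0)
separated = ica.fit_transform(filtered)      # shape: (n_samples, n_sources)
print(separated.shape)
```

The separated components would then be windowed and fed to the feature extractor and classifier.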

Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

This paper is motivated by the importance of multi-sensor image fusion, with specific focus on the fusion of infrared (IR) and visible images (VI) for applications including military reconnaissance. Image fusion is the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager. Visible images are captured from reflected radiation in the visible spectrum, whereas thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. This paper proposes image fusion algorithms based on a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification. The work includes an implementation of the proposed fusion algorithm in MATLAB, along with a comparative analysis to decide the optimal number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the validity of the proposed method; experiments show that it produces good fusion results. Deploying image fusion in practice exposes several challenges of popular fusion methods: algorithms with high computational cost and complex processing steps can deliver accurate fused results, but they are hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity.
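
To make the MST idea concrete, here is a minimal sketch of multi-scale fusion using a Laplacian pyramid and a max-absolute coefficient rule. It is not the paper's MATLAB implementation: the specific MST variant, the region-based rule, consistency verification, the file names, and the number of levels are all assumptions.

```python
# Laplacian-pyramid fusion sketch: max-absolute rule on detail levels,
# averaging on the coarsest level.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])                        # coarsest approximation level
    return lap

def fuse(pyr_a, pyr_b):
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))  # average the base level
    return fused

def reconstruct(pyr):
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(img, 0, 255).astype(np.uint8)

visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # assumed input files
infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
levels = 4                                                   # assumed pyramid depth
fused = reconstruct(fuse(laplacian_pyramid(visible, levels),
                         laplacian_pyramid(infrared, levels)))
cv2.imwrite("fused.png", fused)
```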

The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigation are not keeping up with criminal developments, and criminals take advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that helps develop a plausible theory to be presented as evidence in court. This paper develops a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task; the rules and knowledge contained within each agent depend on the investigation type. Criminal investigations are classified quickly and efficiently using the case-based reasoning (CBR) technique. The framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning, and it was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs, and the time taken for the Hash Set Agent to execute was reduced significantly. Loading the agents cost 5% of the time, as the File Path Agent prescribed deleting 1,510 files while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a method that combines Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) into a face recognition system. The system is trained and evaluated on the ORL dataset, which consists of 400 face images in 40 classes, with 10 images per class. First, PCA enables the use of a smaller network and reduces the training time of the CNN by removing redundancy while preserving most of the variance with a smaller number of coefficients. Second, the K-Means clustering model is trained on the PCA-compressed data, which selects cluster centers with better characteristics. Finally, the K-Means features serve as initial values and input data for the CNN. The accuracy and performance of the proposed method were compared with other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). In our experiments, the proposed method achieved the highest performance after 90 epochs: 99% accuracy and F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%. The method therefore proved efficient in identifying faces in images.
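
A minimal scikit-learn-only sketch of the PCA → K-Means → classifier pipeline is shown below. The Olivetti faces bundled with scikit-learn are the ORL images (400 images, 40 subjects); a logistic regression stands in for the paper's CNN, and the split, component count, and cluster count are assumptions, so results will differ from those reported.

```python
# PCA compression -> K-Means cluster-distance features -> simple classifier.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

faces = fetch_olivetti_faces()               # 400 images, 40 subjects (ORL)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=100, whiten=True, random_state=0)   # assumed component count
X_train_p = pca.fit_transform(X_train)
X_test_p = pca.transform(X_test)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0)  # one cluster per class (assumption)
kmeans.fit(X_train_p)
# Distances to the 40 cluster centres are used as the learned features.
F_train = kmeans.transform(X_train_p)
F_test = kmeans.transform(X_test_p)

clf = LogisticRegression(max_iter=2000).fit(F_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(F_test)))
```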

Numerical Evaluation of Turbulent Friction on Walls in the Penstock of the Trois-Gorges Dam by the Swamee-Jain Method

Because the Colebrook-White expression for the friction coefficient is an implicit equation, explicit correlations such as the Swamee-Jain equation have been developed to make it easier to apply. In this work, this equation was applied to the penstock of the Trois-Gorges dam in order to observe the evolution of the turbulent boundary layer and the friction along the walls. The study is carried out using a 3D numerical approach in FLUENT so that wall effects are taken into account. The results show that the turbulent friction and the boundary-layer values evolve differently depending on the position of the pipe portions. We also observe that the inclination of the pipe has a significant influence on the turbulent friction; likewise, a fair evaluation of the friction cannot be made without specifying the choice and location of the wall.
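
For reference, the Swamee-Jain correlation named in the title is an explicit approximation of the Colebrook-White friction factor; the sketch below uses placeholder pipe and flow values, not the penstock's actual data.

```python
# Swamee-Jain explicit approximation of the Darcy friction factor, valid roughly
# for 5e3 < Re < 1e8 and 1e-6 < eps/D < 1e-2.
import math

def swamee_jain(reynolds: float, roughness: float, diameter: float) -> float:
    """Darcy friction factor f from the Swamee-Jain correlation."""
    rel_rough = roughness / diameter
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / reynolds**0.9) ** 2

# Placeholder example: water (nu ~ 1e-6 m^2/s) in a 2 m diameter steel penstock.
velocity, diameter, nu, roughness = 4.0, 2.0, 1.0e-6, 4.5e-5
Re = velocity * diameter / nu
print(f"Re = {Re:.3e}, f = {swamee_jain(Re, roughness, diameter):.4f}")
```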

Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method

The fresh-state conditions of a mortar are decisive for obtaining high mechanical performance. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for the proper design of the mixes prior to their placement on construction sites. Absorption is a determining factor in mix design because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. This work therefore focuses on the mechanical behavior of recycled mortars manufactured with moisture correction using the halogen-light thermogravimetric balance (TBHL) technique, in comparison with the traditional ASTM C 128 standard method. The TBHL technique is advantageous in terms of reduced consumption of resources such as materials, energy, and time. The results show that, in contrast to the ASTM C 128 method, the TBHL technique yields higher precision in the absorption values of recycled aggregates, which is reflected not only in a more sustainable characterization process for construction materials but also in an effect on the mechanical performance of the recycled mortars.

Design and Analysis of Low-Power, High Speed and Area Efficient 2-Bit Digital Magnitude Comparator in 90nm CMOS Technology Using Gate Diffusion Input

Digital magnitude comparators based on the Gate Diffusion Input (GDI) implementation technique are fast and area-efficient, and they consume less power than comparators built with other implementation techniques. However, GDI is less efficient for some logic gates and does not provide full voltage swing. In this paper, we compare the GDI implementation technique with other implementation methods, such as static CMOS, Pass Transistor Logic (PTL), and Transmission Gate (TG), in 90 nm, 120 nm, and 180 nm CMOS technologies using the BSIM4 MOS model. We also propose a hybrid implementation methodology for digital magnitude comparators that significantly improves the power, speed, area, and voltage swing. Simulation results reveal that the hybrid implementation improves on the usual GDI implementation by 10.84% in power dissipation, 41.6% in propagation delay, and 47.95% in power-delay product (PDP). Microwind & Dsch version 3.5 and the Tanner EDA 16.0 tools were used for simulation.

Efficient Alias-free Level Crossing Sampling

This paper proposes strategies for level crossing (LC) sampling and reconstruction that provide alias-free, high-fidelity signal reconstruction for speech signals without the number of samples growing exponentially with bit-depth. We introduce LC sampling methods that keep the sampling rate close to the Nyquist rate even for large bit-depths. The results indicate that larger variation in the sampling intervals leads to an alias-free sampling scheme; this is achieved either by reducing the bit-depth or by adding jitter to the system at high bit-depths. In conjunction with windowing, the signal is reconstructed from the LC samples using an efficient Toeplitz reconstruction algorithm.
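
As an illustration of the sampling model only, the sketch below performs basic level-crossing sampling with linearly interpolated crossing times; the bit-depth, test signal, and level placement are assumptions, and the paper's jitter and Toeplitz reconstruction steps are not reproduced.

```python
# Minimal level-crossing (LC) sampling sketch with interpolated crossing times.
import numpy as np

def lc_sample(x, t, levels):
    """Return (times, values) where x crosses any of the given levels."""
    times, values = [], []
    for lv in levels:
        s = x - lv
        idx = np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]   # sign changes
        frac = s[idx] / (s[idx] - s[idx + 1])                     # linear interp of crossing instant
        times.extend(t[idx] + frac * (t[idx + 1] - t[idx]))
        values.extend(np.full(idx.size, lv))
    order = np.argsort(times)
    return np.asarray(times)[order], np.asarray(values)[order]

fs = 16_000
t = np.arange(0, 0.1, 1 / fs)
x = 0.6 * np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

bit_depth = 4                                   # assumed bit-depth
levels = np.linspace(-1, 1, 2**bit_depth + 1)
ts, vs = lc_sample(x, t, levels)
print(f"{ts.size} LC samples vs {t.size} uniform samples "
      f"(average rate {ts.size / t[-1]:.0f} Hz)")
```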

Vague Multiple Criteria Decision Making Analysis Method for Fighter Aircraft Selection

Fighter aircraft selection is one of the most critical strategic decisions in defense multiple-criteria decision-making analysis, as it strengthens the decisive power and superiority of air defense within the defense strategy. Vague set theory is an adequate approach for modeling vagueness, uncertainty, and imprecision in decision-making problems. This study integrates vague set theory with the technique for order of preference by similarity to ideal solution (TOPSIS) to support fighter aircraft selection, and the proposed method is applied to the selection of fighter aircraft for the Air Force. In the proposed approach, the ratings of alternatives and the importance weights of the criteria are represented using vague set theory. An illustrative example of fighter aircraft selection demonstrates the applicability and effectiveness of the proposed approach. The fighter aircraft candidates were evaluated under six criteria: costability, payloadability, maneuverability, speedability, stealthility, and survivability. The analysis shows that the best fighter aircraft is the one with the highest closeness coefficient value. The proposed method can also be applied to other multiple-criteria decision analysis problems.
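
To show how a closeness coefficient ranks alternatives, here is a crisp TOPSIS sketch; the paper's vague-set extension, the real aircraft ratings, and the criterion weights are not reproduced — the decision matrix below is made up for illustration.

```python
# Crisp TOPSIS sketch: normalize, weight, ideal/anti-ideal points, closeness.
import numpy as np

# rows: candidate aircraft, columns: the six criteria
X = np.array([[7, 8, 9, 8, 6, 7],
              [6, 9, 7, 9, 7, 8],
              [8, 7, 8, 7, 8, 6]], dtype=float)
w = np.array([0.20, 0.15, 0.20, 0.15, 0.15, 0.15])          # assumed weights
benefit = np.array([False, True, True, True, True, True])   # cost criterion treated as cost-type (assumption)

V = w * X / np.linalg.norm(X, axis=0)                # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("closeness coefficients:", np.round(closeness, 3))
print("best alternative:", int(np.argmax(closeness)) + 1)
```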

A Study of Agile-Based Approaches to Improve Software Quality

Agile software development approaches and techniques are regarded as efficient, effective, and popular methods for developing software. They are useful for producing high-quality software that meets client requirements with zero defects and within short delivery periods. In agile software development methodology, quality is tied to the code and is managed through practices such as refactoring, pair programming, test-driven development, behavior-driven development, acceptance test-driven development, and demand-driven development. Software quality is measured with metrics such as the number of defects found during the development and improvement of the software. Using the above-mentioned practices reduces the likelihood of defects in the developed software and hence improves quality. This paper studies agile-based quality methods for software development that ensure improved software quality as well as reduced cost and customer satisfaction.

Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

This paper proposes strategies for level crossing (LC) sampling and reconstruction that provide high-fidelity signal reconstruction for speech signals; these strategies circumvent the exponential growth in the number of samples as bit-depth increases and are therefore highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is a key factor in the quality of signal reconstruction: including samples with short intervals does not improve the accuracy of the reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve sampling efficiency without compromising performance. A reconstruction technique is also proposed that enhances numerical stability through linear interpolation of samples separated by large intervals; interpolation is shown to improve both the accuracy of the reconstruction and the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even when the average sampling rate is below the Nyquist rate.
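
The sketch below illustrates only the interpolation idea: nonuniformly spaced samples, such as those produced by LC sampling, are linearly interpolated back onto a uniform grid and the reconstruction error is measured. The paper's RCLC sample selection and Toeplitz/windowed reconstruction are not reproduced, and the irregular sample set here is synthetic.

```python
# Linear-interpolation reconstruction of a nonuniformly sampled signal.
import numpy as np

fs = 16_000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.4 * np.sin(2 * np.pi * 900 * t)

# Stand-in for LC sample times: an irregular subset of the uniform grid.
rng = np.random.default_rng(0)
keep = np.sort(rng.choice(t.size, size=t.size // 6, replace=False))
t_nu, x_nu = t[keep], x[keep]

x_hat = np.interp(t, t_nu, x_nu)              # interpolate onto the uniform grid
snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
print(f"{t_nu.size} nonuniform samples -> reconstruction SNR ~ {snr_db:.1f} dB")
```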

A Study of Learning to Enhance Career Skills Consistent with Disruptive Innovation in the Creative Strategies for Advertising Course

This project studies learning activities that build experience through actual work performance in order to enhance career skills and the ability to use technology in advertising work for undergraduate students enrolled in the Creative Strategies for Advertising course. The instructional model consisted of two learning approaches: (1) simulation-based learning, in which students work through simulations of the various sections of creative advertising work, with their own processes and steps, together with virtual-technology learning in advertising companies; and (2) project-based learning, in which learners carry out actual work based on the process of creating and producing creative advertisements to be presented on new media channels. The results of the learning management showed that (1) the students gained experience in the advertising process at a higher level, and (2) the students developed work-performance skills from actual practice that enabled them to create and present their own work; they also produced more efficient work outcomes and disseminated them on new media channels at a better level.

Fuzzy Power Controller Design for Purdue University Research Reactor-1

The Purdue University Research Reactor-1 (PUR-1) is a 10 kWth pool-type research reactor located at Purdue University's West Lafayette campus. The reactor was recently upgraded to entirely digital instrumentation and control systems; however, there is currently no automated control system to regulate the reactor power. We propose a fuzzy logic controller, as a form of digital twin, to complement the existing digital instrumentation system and to monitor and stabilize power control using existing experimental data. This work assesses the feasibility of a power controller based on a Fuzzy Rule-Based System (FRBS) through modelling and simulation with a MATLAB algorithm. The controller takes the power error and the reactor period as inputs and generates reactivity insertion as output; the reactivity insertion is then converted to control rod height using a logistic function derived from recorded experimental control rod data. To test the capability of the proposed fuzzy controller, a point-kinetic reactor model is built from the actual PUR-1 operating conditions and a Monte Carlo N-Particle simulation of the core that provides the neutronics parameters of reactor behavior. The point kinetics equations (PKE) were employed to model the dynamic characteristics of the research reactor because they efficiently describe the time-varying relationship between the input and output variables. The controller is demonstrated computationally for several cases: startup, power maneuver, and shutdown. The test results show that the implemented fuzzy controller can satisfactorily regulate the reactor power to follow the demanded power without compromising nuclear safety measures.
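
A one-delayed-group point-kinetics sketch of a reactivity step is shown below to illustrate the plant model the controller acts on; the delayed-neutron fraction, decay constant, generation time, and reactivity value are generic thermal-reactor assumptions, not PUR-1's actual parameters, and the fuzzy rule base itself is not reproduced.

```python
# One-delayed-group point kinetics: dn/dt = ((rho-beta)/Lambda) n + lam c,
#                                   dc/dt = (beta/Lambda) n - lam c.
from scipy.integrate import solve_ivp

beta, lam, Lam = 0.0065, 0.08, 5.0e-5   # assumed delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                             # assumed step reactivity insertion (dk/k)

def pke(t, y):
    n, c = y                            # relative power, precursor concentration
    dn = (rho - beta) / Lam * n + lam * c
    dc = beta / Lam * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (Lam * lam)]          # equilibrium precursor level at n = 1
sol = solve_ivp(pke, (0.0, 10.0), y0, method="LSODA", max_step=0.01)
print(f"relative power after 10 s: {sol.y[0, -1]:.2f}")
```

In the closed loop, the fuzzy controller would update rho at each step from the power error and reactor period.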

A Comparison of Air Pollution in Developed and Developing Cities: A Case Study of London and Beijing

With the rapid development of industrialization, countries at different stages of development have gradually begun to pay attention to the impact of air pollution on health and the environment. Air-quality control in developed countries is a useful reference for developing countries, and artificial intelligence and related technologies also play a positive role in air pollution prediction. By comparing the annual changes in pollution in London and Beijing, this paper concludes that pollution in the developed city is relatively low and stable, while pollution in Beijing is heavier and less stable but is clearly improving. In addition, an analysis of the major pollutants in Beijing over the past eight years shows that all pollutants except O3 exhibit a significant downward trend, and all pollutants except O3 are correlated with one another; PM10 and PM2.5 have the greatest influence on the air quality index (AQI). Python, which is widely used in artificial intelligence, is used as the main software to build two models: a support vector machine (SVM) and linear regression. Comparing the two models under the same conditions shows that the SVM achieves higher accuracy in pollution prediction. The results of this study provide a valuable reference for pollution control and prediction in developing countries.
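
A minimal sketch of the model comparison follows; the synthetic pollutant columns and target stand in for the actual London/Beijing records, and the hyperparameters are illustrative assumptions.

```python
# Compare support-vector regression and linear regression for AQI prediction.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 300, size=(1000, 5))      # synthetic PM2.5, PM10, SO2, NO2, CO
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 5 * np.sin(X[:, 3] / 40) + rng.normal(0, 10, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "SVM (RBF SVR)": make_pipeline(StandardScaler(), SVR(C=100, gamma="scale")),
    "linear regression": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```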

Tailormade Geometric Properties of Chitosan by Gamma Irradiation

Chitosans (CSs) in solution are increasingly used, through a range of their geometric properties, in various academic and industrial sectors, especially in pharmaceutical and biomedical engineering. To provide a tailoring guide for CS users, gamma (γ)-irradiation technology and simple viscosity measurements were used in this study. CS solid discs (0.5 cm thick and 2.5 cm in diameter) were exposed in air to Cobalt-60 γ-radiation at room temperature and a constant 50 kGy dose for different exposure times (t_γ). Dilute solutions of the native and irradiated CSs were then prepared by dissolving 1.25 mg cm⁻³ of each polymer in 0.1 M NaCl/0.2 M CH₃COOH. Single-concentration relative viscosity (η_r) measurements were used to obtain the intrinsic viscosity ([η]) values and the related parameters: the molar mass (M_η), hydrodynamic radius (R_H,η), radius of gyration (R_G,η), and second virial coefficient (A_2,η) of the CSs in solution. The results show an exponential decrease of η_r, [η], M_η, R_H,η, and R_G,η with increasing t_γ, suggesting random chain scission of the CS glycosidic bonds with rate constant k_r ≈ 0.017 min⁻¹ and corresponding lifetime τ_r = 1/k_r ≈ 57.14 min. The results also show an exponential decrease of A_2,η with increasing t_γ, which can be attributed to the growth of the excluded-volume effect in the CS segments and, hence, better solution quality. The results are summarized in the following scaling laws as a tailoring guide: R_H,η = 6.98 × 10⁻³ M_r^0.65; R_G,η = 7.09 × 10⁻⁴ M_r^0.83; A_2,η = 121.03 M_η,r^−0.19.

Recommended Practice for Experimental Evaluation of the Seepage Sensitivity Damage of Coalbed Methane Reservoirs

The coalbed methane (CBM) extraction industry, an unconventional energy source, has not established guidelines for the experimental evaluation of sensitivity damage in coal samples. The experimental procedures in previous research mainly followed the industry standard for conventional oil and gas reservoirs (CIS). However, the existing evaluation method ignores certain critical differences between CBM reservoirs and conventional reservoirs, which inevitably results in an inaccurate evaluation of sensitivity damage and, eventually, poor decisions when formulating formation-damage prevention measures. In this study, we propose improved experimental guidelines for evaluating the seepage sensitivity damage of CBM reservoirs by addressing the shortcomings of the existing methods. The proposed method was established through a theoretical analysis of the main drawbacks of the existing methods and validated through comparative experiments. The results show that the proposed evaluation technique provides reliable experimental results that better reflect actual reservoir conditions and can correctly guide the future development of CBM reservoirs. This study pioneers research on the optimization of experimental parameters for the efficient exploration and development of CBM reservoirs.

The Public Law Studies: Relationship between Accountability, Environmental Education and Smart Cities

The study of public policies with regard to management efficiency is essential nowadays. Public policy concerns what governments do or do not do; it is an area that has grown worldwide and contributes knowledge of the technologies and methodologies that monitor and evaluate the performance of public administrators. The information published on official government websites needs to provide transparency and managerial responsiveness. Transparency is thus a primordial factor for accountability, providing services to citizens through the expansion of transparent, efficient, and democratic information that values administrative eco-efficiency. The ecologically balanced management of a Smart City must promote environmental education, building a fairer society that ensures equal access to quality environmental resources. Smart Cities add value to public management by enabling interaction between people, enhancing environmental education and the practical applicability of administrative eco-efficiency, fostering economic development, and improving the quality of life.

Spatial Correlation of Channel State Information in Real LoRa Measurement

The Internet of Things (IoT) is being developed to ensure monitoring and connectivity across different applications, so it is important to study channel propagation characteristics in Low Power Wide Area Networks (LPWAN), especially LoRaWAN. In this paper, an in-depth investigation of the reciprocity between the uplink and downlink Channel State Information (CSI) is carried out through an outdoor measurement campaign in the area of Campus Beaulieu in Rennes. At each location, the CSI reciprocity is quantified using the Pearson Correlation Coefficient (PCC), which shows a very high linear correlation between the uplink and downlink CSI. This reciprocity could be exploited for physical-layer security between the node and the gateway. On the other hand, most of the CSI shapes from different locations are highly uncorrelated with each other; hence, significant localization gains could be obtained by exploiting the frequency hopping in LoRa systems to access a wider band.
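
A minimal sketch of the reciprocity check follows: the Pearson correlation between uplink and downlink CSI magnitude vectors. The CSI values here are synthetic, and the number of probed channels is an assumption, not the Campus Beaulieu measurements.

```python
# Pearson correlation between uplink and downlink CSI magnitudes.
import numpy as np

rng = np.random.default_rng(0)
channels = 8                                   # assumed number of LoRa channels probed
uplink = rng.normal(0, 1, channels) + 1j * rng.normal(0, 1, channels)
downlink = uplink + 0.05 * (rng.normal(0, 1, channels) + 1j * rng.normal(0, 1, channels))

def pcc(a, b):
    a, b = np.abs(a), np.abs(b)                # correlate CSI magnitudes
    return np.corrcoef(a, b)[0, 1]

print(f"uplink/downlink PCC ~ {pcc(uplink, downlink):.3f}")
```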

Paradigm of Digital Twin Application in Project Management in Architecture, Engineering and Construction

With the growing adoption of advanced technologies such as building information modeling, artificial intelligence, and wireless networks, the collaboration and integration of these technologies into digital twins are becoming more prominent in the architecture, engineering, and construction (AEC) industry, given the nature and scale of the industry. Digital twins have proved to be effective for AEC professionals in design and project management. The digital twin concept is continuously developing, and it is vital for AEC professionals and other stakeholders to understand it, together with the adoption of the various advanced building technologies related to the AEC industry. This paper reviews the application of digital twins in project management in the AEC industry and highlights the challenges AEC practitioners face amid the revolution of technologies, including digital twins and building information modelling (BIM), for further research and future study.