Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, and the system is integrated with a 7A6 Series manipulator for automatic object gripping. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme, built on a convolutional neural network (CNN) structure, is adopted for object classification and center-point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and 3D position error below 0.4 mm, which is promising for future intelligent robotic applications in Industry 4.0 environments.
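
As an illustration of the contour-based orientation step, the following minimal Python sketch (using OpenCV) extracts the largest contour in a binarized region of interest around the detected object and reads the orientation angle off its minimum-area bounding rectangle; the paper's exact pipeline may differ.

```python
import cv2
import numpy as np

def object_orientation(binary_roi: np.ndarray) -> float:
    """Orientation angle (degrees) of the largest contour in a binary ROI."""
    contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in the region of interest")
    largest = max(contours, key=cv2.contourArea)
    # minAreaRect returns ((cx, cy), (w, h), angle); the rectangle's rotation
    # serves as the grasp orientation for the manipulator.
    _, _, angle = cv2.minAreaRect(largest)
    return angle
```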

Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window widths, for the binary offset carrier (BOC) and multiplexed binary offset carrier (MBOC) signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the multipath-induced distortion on the discriminator is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is well suited for implementation in low-cost receivers.
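
The iterative width reduction can be sketched as follows; `actual_disc` and `local_disc` are hypothetical callables returning sampled discriminator curves for a given window width, standing in for the receiver's correlator outputs.

```python
import numpy as np

def find_window_width(actual_disc, local_disc, w0=1.0, shrink=0.9,
                      tol=1e-3, w_min=0.01):
    """Shrink the window width until the multipath distortion, measured as
    the Euclidean distance between the actual and local discriminator
    curves, falls below tol."""
    w = w0
    while w > w_min:
        distortion = np.linalg.norm(actual_disc(w) - local_disc(w))
        if distortion < tol:
            return w
        w *= shrink  # iteratively reduce the window width
    return w_min
```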

Low-Cost Microcontroller-Based ECG Machine

The electrocardiograph (ECG) machine is an important piece of equipment for diagnosing heart problems, and ECG signals are also used to detect many other features of the human body and behavior. However, such machines are neither cheap enough nor simple enough to operate to be widely used in countries like Bangladesh, where most people have very low incomes. Therefore, in this paper, we implement a simple and portable ECG machine. Since the Arduino Uno microcontroller is very cheap, we used it in our system to minimize the cost. Our designed system is powered by a dual-voltage DC power supply. It provides wireless connectivity so that ECG data can be viewed either on a smartphone running the Android operating system or on a PC/laptop running Windows. A graphical user interface has been designed to display the data. An Android application has also been built using the IDE for Android 2.3 and API 10. Since it requires no USB host API, almost 98% of the Android smartphones available in the country can use it. We measured the heart rate with our designed machine and with an ECG machine of a reputed diagnostic center in Dhaka city, for the same people at the same time on the same day. We then calculated the percentage error between the readings of the two machines and computed its average. From this computation, we found that the average percentage error is within an acceptable limit.
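
The error computation described above is straightforward; a minimal sketch (with hypothetical readings) follows.

```python
def average_percent_error(designed, reference):
    """Average of |designed - reference| / reference, in percent."""
    errors = [abs(d - r) / r * 100.0 for d, r in zip(designed, reference)]
    return sum(errors) / len(errors)

# Hypothetical heart-rate readings (beats per minute):
print(average_percent_error([72, 80, 65], [74, 79, 66]))  # ~1.83 %
```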

Ghost Frequency Noise Reduction through Displacement Deviation Analysis

Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process because of the small wavelengths involved. In this paper, gear whine noise occurring at non-integer multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the gear flanks arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise in both end-of-line machine and vehicle-level measurements. By applying the DDA method alongside standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line, eliminating defective parts and thereby eliminating ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting ghost frequencies.
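
As a generic illustration (not the paper's DDA implementation), ghost orders can be read off the spectrum of a flank-deviation signal sampled uniformly over one revolution; peaks at non-integer multiples of the mesh order indicate ghost excitations.

```python
import numpy as np

def deviation_order_spectrum(deviation: np.ndarray, samples_per_rev: int):
    """FFT of a flank-deviation signal sampled over whole revolutions;
    returns (orders, amplitudes) so ghost orders can be read off directly."""
    n = len(deviation)
    amplitude = np.abs(np.fft.rfft(deviation - deviation.mean())) * 2.0 / n
    orders = np.fft.rfftfreq(n, d=1.0 / samples_per_rev)  # cycles/revolution
    return orders, amplitude
```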

Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the famous German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
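
A short worked statement of the model's core approximation may help; this is the standard Fenton-Wilkinson moment matching (notation ours, not verbatim from the paper). With independent leg times $X_i \sim \mathrm{Lognormal}(\mu_i, \sigma_i^2)$, the changeover time $T = \sum_i X_i$ is approximated as log-normal with

$$
\sigma_{\mathrm{FW}}^2 = \ln\!\left(1 + \frac{\sum_i e^{2\mu_i+\sigma_i^2}\big(e^{\sigma_i^2}-1\big)}{\big(\sum_i e^{\mu_i+\sigma_i^2/2}\big)^2}\right),
\qquad
\mu_{\mathrm{FW}} = \ln\!\left(\sum_i e^{\mu_i+\sigma_i^2/2}\right) - \frac{\sigma_{\mathrm{FW}}^2}{2},
$$

so that the predicted place at changeover time $t$ takes the sigmoidal form $\hat{N}\,\Phi\big((\ln t - \mu_{\mathrm{FW}})/\sigma_{\mathrm{FW}}\big)$, where $\hat{N}$ is the German-tank-style estimate of the number of teams and $\Phi$ the standard normal distribution function.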

Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational cost of shape optimization in engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the interior of the element, but some points have also been located on the boundary of the element where stresses are fairly accurate compared to the nodal values. It is subsequently concluded that there exist unique points within the element where stresses have higher accuracy than at other points in the element. The main aim is therefore to develop a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for the quadratic 9-noded serendipity element are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. Five engineering problems were identified for this purpose, and the numerical results of the 9-noded element were compared with those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered the standard. Root-mean-square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. The numerical verification shows that, in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses calculated on the boundary segment around the midpoint enclosed by the 0.577 points are also very good, with very small error. As sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points on the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and in shape optimization.
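
The reported value 0.577 matches $1/\sqrt{3} \approx 0.57735$, the abscissa of the two-point Gauss quadrature rule, so the optimal sampling points found are, in local coordinates,

$$
(\xi,\eta) = (0,0) \quad \text{and} \quad \xi,\eta = \pm\tfrac{1}{\sqrt{3}} \approx \pm 0.577,
$$

i.e., the well-known superconvergent (Barlow/Gauss) stress-sampling points of quadratic elements; this identification is ours, as the abstract itself reports only the numerical value.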

Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data, and the presented filter utilizes a weight function to allocate a weight to each point of the data. Furthermore, unlike most existing methods, the presented filtering algorithm is designed to be fully automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples, with an average root mean square error (RMSE) of 0.35 m.
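
A simplified sketch of the iterative weighted-spline idea follows, using SciPy; the paper's actual weight function is not reproduced, so a Gaussian down-weighting of points above the fitted surface stands in for it.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def filter_ground(x, y, z, n_iter=5, s=None):
    """Iteratively fit a smoothing-spline surface and down-weight points
    above it (likely vegetation); return a boolean mask of ground points."""
    w = np.ones_like(z)
    for _ in range(n_iter):
        surface = SmoothBivariateSpline(x, y, z, w=w, s=s)
        residual = z - surface.ev(x, y)
        sigma = residual.std()
        # Placeholder weight function: full weight on/below the surface,
        # Gaussian decay above it. The paper's weight function differs.
        w = np.where(residual > 0, np.exp(-(residual / sigma) ** 2), 1.0)
    return residual < sigma  # simplified ground-point criterion
```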

LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

The evolution of mobile broadband technologies has increased users' download rates for current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and reliability of the connection, thus avoiding large losses of data, time and productivity. Some failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that works stably for data and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal-to-Interference-plus-Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, with two operators evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the different transmission modes vary depending on the BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio) parameters. For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) like Operator 2, reassigns users to a lower band, AWS (1700 MHz); even so, the difference in signal quality with respect to Operator 2's data connections, and the difference in the transmission modes selected by the eNodeB for Operator 1, are remarkable.

Mistranslation in Cross Cultural Communication: A Discourse Analysis on Former President Bush’s Speech in 2001

The differences between languages play a major role in cross-cultural communication. If meanings are not translated accurately, the risks can be serious, not only on an interpersonal level but also on the international and political levels. The use of metaphorical language by politicians can cause great confusion, often leading to statements being misconstrued. In these situations, it is the translators who struggle to put forward the intended meaning with clarity, and this makes translation an important field to study and analyze when it comes to cross-cultural communication. Owing to the growing importance of language and the power of translation in politics, this research analyzes the part of President Bush's 2001 speech in which he used the word "crusade", which caused his statement to be misconstrued. The research uses a discourse analysis of the cross-cultural communication literature, which provides answers supported by historical, linguistic and communicative perspectives. The first finding indicates that the word "crusade" carries a different meaning and significance in the narratives of the Western world than in those of the Middle East. The second is that, linguistically, maintaining cultural meanings through translation is quite difficult and challenging. Third, from the cross-cultural communication perspective, the common and frequent use of literal translation is a sign of the poor strategies followed in translation training. Based on the example of Bush's speech, this paper highlights the weak translation practices in cross-cultural communication that are still common across the world. Translation studies have to take such issues seriously and attempt to find a solution. In every language, there are words and phrases whose cultural, historical and social meanings are woven into the language. Literal translation is not the solution to this problem, because that strategy is unable to convey these meanings in the target language.

Tibyan Automated Arabic Correction Using Machine-Learning in Detecting Syntactical Mistakes

The Arabic language is one of the most important languages in the world. Learning it matters to many people because of its religious and economic significance, and the real challenge lies in using it without grammatical or syntactical mistakes. This research focuses on detecting and correcting syntactic mistakes in Arabic according to their position in the sentence, concentrating on two of the main syntactical rules in Arabic: the dual and the plural. It analyzes each sentence in the text using the Stanford CoreNLP morphological analyzer and a machine-learning approach in order to detect syntactical mistakes and then correct them. A prototype of the proposed system was implemented and evaluated. It uses the support vector machine (SVM) algorithm to detect Arabic grammatical errors and corrects them using a rule-based approach. The prototype system achieves a fair accuracy of 81%. In general, it provides a set of useful grammatical suggestions covering rules the user may overlook while writing, whether through unfamiliarity with grammar or the speed of writing, such as alerting the user when a plural term is used to refer to one person.
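
A hedged sketch of the detection stage follows: an SVM deciding whether a dual/plural form is syntactically correct in its sentence context. The `featurize` helper and its feature set are hypothetical; in the paper the features come from Stanford CoreNLP morphological analysis.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def featurize(token, prev_token, next_token):
    return {
        "tag": token["tag"],            # morphological tag of the word
        "prev_tag": prev_token["tag"],  # left-context tag
        "next_tag": next_token["tag"],  # right-context tag
    }

detector = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
# detector.fit(features, labels)  # labels: correct vs. mistaken usage
```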

Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

The main objective of this study is to find a suitable approach for monitoring land infrastructure growth over time using multispectral satellite images. Bi-temporal change detection methods cannot indicate continuous change occurring over a long period. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region: the generalized likelihood ratio test (GLRT) detects changed pixels from the probabilistically estimated model of the corresponding pixel. Change detection assumes that the images have been co-registered prior to estimation; to minimize residual co-registration error, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. The method faces several challenges. The first and foremost is obtaining a sufficiently large number of datasets for multivariate distribution modelling, since a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there is also a high probability of false alarms. The overall conclusion is that the probabilistic method described in this paper has given promising results, which merit further pursuit.
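
For reference, the detector has the standard GLRT form (notation ours): with $\Theta_0$ the parameters of the no-change model fitted to the time series and $\mathbf{x}$ the pixel under test together with its 8 neighbours,

$$
\lambda(\mathbf{x}) = \frac{\sup_{\theta \in \Theta_0} p(\mathbf{x};\theta)}{\sup_{\theta \in \Theta} p(\mathbf{x};\theta)},
$$

and a change is declared when $\lambda(\mathbf{x})$ falls below a threshold chosen for the desired false-alarm probability.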

Numerical Study of Steel Structures Responses to External Explosions

Due to the constant increase in terrorist attacks, the research and engineering communities have given significant attention to building performance under explosions. This paper presents a methodology for studying and simulating the dynamic responses of steel structures during external detonations, particularly for accurately investigating the impact of increasing charge weight on the members' overall behavior, resistance and failure. A damage prediction method was introduced to evaluate the damage level of the steel members based on five explosion scenarios. The Johnson-Cook strength and failure model was used, together with the ABAQUS finite element code, to perform the explicit dynamic analysis, and earlier field tests were used to verify the acceptability and accuracy of the proposed material strength and failure model. Based on structural response evaluation criteria such as deflection, vertical displacement, drift index and damage level, the obtained results show the vulnerability to blast loading of steel columns and unbraced steel frames that are designed and optimized to carry only dead and live loads.
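
For context, the Johnson-Cook flow stress used in such simulations has the standard form

$$
\sigma = \big(A + B\,\varepsilon_p^{\,n}\big)\big(1 + C \ln \dot{\varepsilon}^{*}\big)\big(1 - T^{*\,m}\big),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}},
$$

where $\varepsilon_p$ is the equivalent plastic strain, $\dot{\varepsilon}^{*}$ the dimensionless plastic strain rate, and $A$, $B$, $C$, $n$, $m$ material constants; the paper's specific parameter values are not reproduced here.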

Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics

Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish between butter samples and non-butter samples. Further, quantification of the margarine content in adulterated butter samples was investigated. The fingerprint region (1400-800 cm⁻¹) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modelling (Soft Independent Modelling of Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region shows a clustering of the two sample types. All samples were classified in their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were assigned to both the butter class and the non-butter class. The two-class PLS-DA model (R² = 0.73; RMSEP, Root Mean Square Error of Prediction = 0.26% w/w) achieved a sensitivity of 71.4% and a Positive Predictive Value (PPV) of 100%. Its detection threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R² = 0.84, RMSEP = 16.54%) was developed. PLS-DA proved a suitable classification tool and PLS-R a proper quantification approach. The results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to distinguish pure butter samples from adulterated ones and to determine the degree of margarine adulteration in butter samples.
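
A minimal sketch of the PLS-R quantification step with scikit-learn, on placeholder data standing in for the 1400-800 cm⁻¹ spectra (dimensions and component count are assumptions, not the paper's):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))    # placeholder spectra (samples x wavenumbers)
y = rng.uniform(0, 30, size=60)   # placeholder margarine content, % w/w

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
pls = PLSRegression(n_components=5)   # latent-variable count is a user choice
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))  # RMSEP, as reported above
```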

Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that are well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to yielding overly conservative results, numerical methods that require extensive computational effort, or copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of variables required to determine stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress approach, but requires a critical volume in which the crack exists. To assess the effectiveness of this technique, the study investigated components with different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicate that the "exponential" weighting function is superior to the "absolute" weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the less severe stress transitions of a large circular notch. The proposed method has been shown to be a worthwhile consideration.
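
In its general form, the weight-function approach evaluates the stress-intensity factor as

$$
K_I = \int_0^a \sigma(x)\, m(x,a)\, \mathrm{d}x,
$$

where $\sigma(x)$ is the stress distribution along the prospective crack plane of the uncracked body and $m(x,a)$ the weighting function; the "exponential" and "absolute" forms compared in this paper are two candidate choices of $m(x,a)$, whose exact expressions are not reproduced here.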

Experimental Study of Discharge with Sharp-Crested Weirs

In this study, the water flow in an open channel over a sharp-crested weir is investigated experimentally. For this purpose, a series of laboratory experiments were performed in an open channel with a sharp-crested weir. The maximum head expected over the weir, the total upstream water height, and the downstream water height at the point of impact on the fixed bed of the open channel were measured. The discharge was measured using a tank placed immediately downstream of the open channel. In addition, the discharge and the upstream velocity were calculated using well-known equations. The main finding is that the relative error for the majority of the experimental measurements is within ±4%, meaning that calculating the discharge over a sharp-crested weir gives very good results compared to the values from the known equations.
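
For reference, the commonly used relation for a rectangular sharp-crested weir, representative of the "known equations" referred to above, is

$$
Q = \frac{2}{3}\, C_d\, \sqrt{2g}\; b\, h^{3/2},
$$

where $Q$ is the discharge, $b$ the weir width, $h$ the head over the crest, $g$ the gravitational acceleration, and $C_d$ an empirically determined discharge coefficient.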

Understanding How Money Laundering and Financing of Terrorism Are Conducted through the Real Estate Sector in the Middle East and North Africa Region

This research seeks to identify how money laundering activities are executed through the real estate sector. The article provides academics with literature on the topic and gives scholars and practitioners a better understanding of the risks and challenges involved. Data are gathered through a survey in the Middle East and North Africa region and a review of the available research. The results of the analysis help identify the factors attracting criminals to the real estate sector, develop an understanding of the methods used to launder illicit funds through this sector, and establish indicators of suspicion for reporting entities. Further analysis reveals the risks posed by money laundering and terrorist financing to the real estate sector and the challenges facing states in this regard.

The Conduct of Laundering Money through Transport of Cash in the Middle East and North Africa Region

This article mainly aims to detect and understand how money laundering activities are executed through the transport of cash, identifying the underlying factors and separating legitimate from illegitimate uses of cash. The research provides academics with additional literature and gives bank supervisors and practitioners a better understanding of the sources and uses of cash in criminal activities and of how cash is used in the laundering mechanism. Data are gathered through a survey in the Middle East and North Africa region and a review of the available research. The results of the analysis help distinguish the factors driving the preference for cash over other payment instruments in the region, and identify what causes the tendency to launder illegal proceeds through cash transportation and how illegal cash is laundered and moved. The paper also sheds light on the major cash-generating criminal activities, their sources and their main destinations.

Modelling and Control of Milk Fermentation Process in Biochemical Reactor

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is fermentation. A thorough insight into the fermentation process, and the knowledge of how to control it, are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of a control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for control system design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and presented that enables the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure an optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative (PID) controller with constant parameters does not provide the desired transient response. An improved adaptive control system was therefore proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested in the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable: it correlates well with the other quantities significant for the quality of the fermentation process and provides important information about the process. The obtained results showed that the designed control system keeps the error between the reference and actual values of the carbon dioxide concentration to a minimum, both during the transient response and in the steady state. The recommended control system tracks the reference signal much more efficiently than the currently used conventional control systems, which are based on linear control theory. The proposed control system thus represents a very effective solution for the improvement of the milk fermentation process.
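
A schematic sketch of a discrete PID controller with slow gain adaptation, in the spirit of the proposed scheme; the plant model and the adaptation law are placeholders, not the paper's.

```python
class AdaptivePID:
    """Discrete PID controller with slowly rescheduled gains (sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measured_co2):
        """One control step on the CO2 concentration; returns the actuating signal."""
        error = reference - measured_co2
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

    def adapt(self, gain_scale):
        # Placeholder adaptation law: rescale the gains as the estimated
        # process gain drifts slowly during fermentation.
        self.kp *= gain_scale
        self.ki *= gain_scale
        self.kd *= gain_scale
```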

Countering Radicalization to Violent Extremism: A Comparative Study of Canada, the UK and South East Asia

Recent high-profile terrorist events in Canada, the United Kingdom and Europe – the London Bridge attacks, the terrorist attacks in Nice, France and Barcelona, Spain, the 2014 Ottawa Parliament attacks and the 2017 attacks in Edmonton – have all raised levels of public and academic concern with so-called “lone-wolf” and “radicalized” terrorism. Similarly, several countries outside of the “Western” world have been dealing with radicalization to violent extremism for several years. Many South East Asian countries, including Indonesia, Malaysia, Singapore and the Philippines have all had experience with what might be described as ISIS or extremist-inspired acts of terrorism. Indeed, it appears the greatest strength of groups such as ISIS has been their ability to spread a global message of violent extremism that has led to radicalization in markedly different jurisdictions throughout the world. These markedly different jurisdictions have responded with counter-radicalization strategies that warrant further comparative analysis. This paper utilizes an inter-disciplinary legal methodology. In doing so, it compares legal, political, cultural and historical aspects of the counter-radicalization strategies employed by Canada, the United Kingdom and several South East Asian countries (Indonesia, Malaysia, Singapore and the Philippines). Whilst acknowledging significant legal and political differences between these jurisdictions, the paper engages in these analyses with an eye towards understanding which best practices might be shared between the jurisdictions. In doing so, it presents valuable findings of a comparative nature that are useful to both academic and practitioner audiences in several jurisdictions.

Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Intelligibility is an essential characteristic of a speech signal; it determines how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform method that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimate of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm selects a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
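
A hedged sketch of per-frame variable-level DWT denoising with PyWavelets; the variance-based level rule and the universal threshold below are stand-ins for the paper's signal-dominant/noise-dominant criterion and thresholding, not its actual implementation.

```python
import numpy as np
import pywt

def denoise_frame(frame, noise_floor, wavelet="db4", max_level=5):
    """Denoise one speech frame with a frame-specific DWT level (sketch)."""
    # Placeholder level rule: decompose deeper when the frame appears
    # noise-dominant (low variance relative to the assumed noise floor).
    level = max_level if np.var(frame) < noise_floor else 2
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(frame)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]
```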