Uvular Alternation in Hasawi Arabic: A Harmonic Serialism Approach

This paper investigates the alternation between the uvular consonants [q] and [ʁ] in Hasawi Arabic, a dialect spoken in Alahsa city in the Eastern Province of Saudi Arabia. To the best of our knowledge, no research has systematically studied this phenomenon in the Hasawi Arabic dialect, and this paper is significant because it fills that gap in the literature on this understudied variety. Most of the data were extracted from interviews the author conducted with 10 participants, all native speakers of the dialect, and complemented by additional forms collected from social media; the latter collection method adds to the significance of the research. The data are analyzed within Harmonic Serialism Optimality Theory (HS-OT), a version of the Optimality Theoretic (OT) framework which holds that linguistic forms are the outcome of the interaction among violable universal constraints, and within the recent development of OT into a model that accounts for linguistic variation through harmonic derivational steps. This alternation process is assumed to be phonologically unconditioned and in free variation in other Arabic varieties in the area. The goal of this paper is to investigate whether the phenomenon is in free variation or governed, what governs the alternation between [q] and [ʁ], and whether the alternation is phonological or driven by other linguistic constraints. The results show that the [q]~[ʁ] alternation is not free but arises from different assimilation processes: positional, segmental-sequence, and vowel-adjacency factors are at work in Hasawi Arabic.

How Children Synchronize with Their Teacher: Evidence from a Real-World Elementary School Classroom

This paper reports on how synchrony occurs between children and their teacher, and on what prevents or facilitates it. The aim of the experiment conducted in this study was to precisely analyze the participants' movements and synchrony and to reveal the process of synchrony in a real-world classroom. The experiment was conducted over around 20 minutes of an English as a foreign language (EFL) lesson, with 11 fourth-grade children and their classroom teacher in a public elementary school in Japan. Previous researchers assert that synchrony produces a state of flow in class, so the Short Flow State Scale (SFSS) was adopted to check the level of flow. The experimental procedure had four steps: 1) The teacher read aloud the first half of an English storybook to the children, with both the teacher and the children at their own desks. 2) The children completed an SFSS check. 3) The teacher read aloud the remaining half of the storybook, having had the children remove their desks before reading. 4) The children completed a second SFSS check. The movements of all participants were recorded with a video camera. The movement analysis showed that the children synchronized better with the teacher in Step 3 than in Step 1, and that the teacher's movement became freer and more prominent without a desk. This implies that the desk acted as a barrier between the children and the teacher, and that removing this barrier allowed the children's reactions to become synchronized with those of the teacher. The SFSS results showed that the children experienced more flow without the barrier than with it, suggesting that synchrony is what produced flow or social emotions in the classroom. The main conclusion is that synchrony leads to cognitive outcomes such as children's academic performance in EFL learning.

The Importance of Analysis of Internal Quality Management Systems and Self-Examination Processes in Engineering Accreditation Processes

The accreditation process for engineering degree programmes is based on various reports evaluated by the relevant governing bodies of the institution of higher education. One of these reports is a self-assessment report, which is to be completed by the applying institution. This paper seeks to emphasise the importance of analysing internal quality management systems and self-examination processes in engineering accreditation processes. A description of how the programme fulfils the criteria should be given, and all relevant stakeholders need to contribute to the writing and structuring of the self-assessment report. The last step is to gather evidence in the form of supporting documentation. In conclusion, the paper also identifies learning outcomes from a case study of seeking accreditation from a relevant international professional body.

From Electroencephalogram to Epileptic Seizure Detection Using Artificial Neural Networks

Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an Artificial Neural Network classifier, trained with the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data from a single patient retrieved from a publicly available EEG dataset.
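As a rough illustration of the classification stage only (not the authors' Training Builder pipeline), the following sketch trains a multilayer perceptron on window-level EEG features; the feature matrix and labels are placeholders for features extracted elsewhere.

```python
# Minimal sketch of the MLP classification stage, assuming window-level EEG
# features X and seizure/non-seizure labels y have already been extracted.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # placeholder feature vectors, one row per window
y = rng.integers(0, 2, size=1000)      # placeholder seizure (1) / non-seizure (0) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```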

Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion

Tracking sports players is a highly challenging scenario, especially in single-feed videos recorded on tight courts, where clutter and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features used to detect and track basketball players. An ablation study is carried out and used to show that a robust tracker can be built with deep learning features alone, without the need to extract contextual ones, such as proximity or color similarity, or to apply camera stabilization techniques. The presented tracker consists of: (1) a detection step, which uses a pretrained deep learning model to estimate the players' poses, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
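A minimal sketch of the semantic-feature idea, under assumptions: a player crop is described by a pooled activation from an intermediate VGG-16 convolutional layer (torchvision), and detections in consecutive frames are matched by cosine similarity. The layer choice and matching rule are illustrative, not the paper's exact configuration.

```python
# Sketch: VGG-based appearance descriptor for a player crop and a simple
# cosine-similarity match between frames (illustrative layer and rule).
import torch
import torch.nn.functional as F
from torchvision import models

# Keep the VGG-16 feature extractor up to the end of the third convolutional block.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:17].eval()

def descriptor(crop: torch.Tensor) -> torch.Tensor:
    # crop: (3, H, W) tensor, already resized and normalized for VGG
    with torch.no_grad():
        fmap = vgg(crop.unsqueeze(0))                  # (1, C, h, w) feature map
    return F.adaptive_avg_pool2d(fmap, 1).flatten()    # pooled C-dimensional descriptor

def similarity(desc_a: torch.Tensor, desc_b: torch.Tensor) -> float:
    return F.cosine_similarity(desc_a, desc_b, dim=0).item()
```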

Improving the Voltage Level in High Voltage Direct Current Systems by Using a Modular Multilevel Converter

This paper presents a scheme for selecting the number of Modular Multilevel Converter (MMC) levels that balances simulation precision against computational efficiency. The whole procedure is benchmarked on a Thevenin-equivalent 133-level MMC model. First, a scheme for computing the fundamental minimum simulation time step is presented so that every voltage level of the waveforms is faithfully represented. Second, the previously developed Improved Analytic Hierarchy Process (IAHP) is adopted to integrate the relative errors of all the input electrical quantities into one comprehensive virtual-fault index for each converter level. Third, the effects of the virtual faults under steady AC and DC and transient conditions are normalized for all model forms and fitted into curves based on the standard form. Finally, the optimal MMC level is obtained from the resulting curves, which assign individual weights to precision and efficiency. The feasibility and effectiveness of the scheme are validated by simulation in MATLAB Simulink.

Main Cause of Children's Deaths in the Indigenous Wayuu Community of the Department of La Guajira: Research Developed through the Use of Data Mining

The main purpose of this research is to discover what causes death in children of the Wayuu community and to analyze those results in depth in order to take corrective measures to properly control infant mortality. We consider it important to determine the reasons producing early death in this specific population, since children are the most vulnerable to high-risk environmental conditions. In this way, the government, through the competent authorities, may develop prevention policies and the right measures to keep this tragic situation from worsening. The methodology used in this investigation is data mining, which consists of collecting and examining large amounts of data to produce new and valuable information. Through this technique it has been possible to determine that the child population is dying mostly from malnutrition. In short, this technique has been very useful for this study; it has allowed us to transform large amounts of information into a conclusive and important statement, which has made it easier to take appropriate steps to resolve a particular situation.

Reducing Later Life Loneliness: A Systematic Literature Review of Loneliness Interventions

Later life loneliness is a social issue that is increasing alongside an upward global population trend. As a society, one way that we have responded to this challenge is by developing non-pharmacological interventions such as befriending services, activity clubs, and meet-ups. Through a systematic literature review, this paper suggests that there is currently an underrepresentation of radical innovation, and an underutilization of digital technologies, in developing loneliness interventions for older adults. The paper examines intervention studies published in English in peer-reviewed journals between January 2005 and December 2014 across four electronic databases. In addition to academic databases, interventions found in grey literature in the form of websites, blogs, and Twitter were also included in the overall review. This approach yielded 129 interventions that were included in the study. A systematic approach allowed the minimization of any bias dictating the selection of interventions to study. A coding strategy based on a pattern analysis approach was devised to compare and contrast the loneliness interventions. Firstly, interventions were categorized on the basis of their objective to identify whether they were preventative, supportive, or remedial in nature. Secondly, depending on their scope, they were categorized as one-to-one, community-based, or group-based. It was also ascertained whether interventions represented an improvement, an incremental innovation, a major advance, or a radical departure in comparison to the most basic form of a loneliness intervention. Finally, interventions were assessed on the basis of the extent to which they utilized digital technologies. Individual visualizations representing the four levels of coding were created for each intervention, followed by an aggregated visual to facilitate analysis. To keep the inquiry within scope and to present a coherent view of the findings, the analysis was primarily concerned with the level of innovation and the use of digital technologies. This analysis highlights a weak but positive correlation between the level of innovation and the use of digital technologies in designing and deploying loneliness interventions, and also emphasizes how certain existing interventions could be tweaked, for example, to enable their migration from incremental to radical innovation. The analysis also points out the value of including grey literature, especially from Twitter, in systematic literature reviews to obtain a contemporary view of the latest work in the area under investigation.

Research on an Improved Adaptive Dot-Shape Beamforming Algorithm for the Frequency Diverse Array

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level to form concentrated point-to-point interference. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, which improves the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of this array can form a dot-shape beam with more concentrated energy, with improved resolution and sidelobe-level performance. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: the estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam, and main-lobe deviation and high sidelobe levels also occur. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, an eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace, and the diagonal matrix of the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their dispersion and improving the performance of beamforming. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shape beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
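The eigenvalue-correction idea can be sketched in numpy as follows: decompose the sample covariance estimated from a small number of snapshots, shrink the spread of the small noise-subspace eigenvalues with an exponential rule, and form minimum-variance weights from the corrected matrix. The correction rule, exponent, and subspace split used here are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of small-eigenvalue correction for limited-snapshot
# minimum-variance beamforming (assumed correction rule, not the paper's exact one).
import numpy as np

def corrected_mvdr_weights(snapshots, steering, alpha=0.5, n_interf=2):
    # snapshots: (n_elements, n_snapshots) complex array; steering: (n_elements,) vector
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
    eigvals, V = np.linalg.eigh(R)                            # ascending eigenvalues
    lam = eigvals.copy()
    noise = lam[:-n_interf]                                   # small (noise-subspace) eigenvalues
    lam[:-n_interf] = noise.mean() * (noise / noise.mean()) ** alpha  # exponentially shrink their spread
    R_corr = (V * lam) @ V.conj().T                           # rebuild corrected covariance
    Rinv_a = np.linalg.solve(R_corr, steering)
    return Rinv_a / (steering.conj() @ Rinv_a)                # single-constraint (MVDR-type) weights
```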

An Improved Total Variation Regularization Method for Denoising Magnetocardiography

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is difficult and is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase (step) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, higher-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to suppress noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
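For reference, a minimal sketch of standard first-order TV denoising solved by a majorization-minimization (reweighted least-squares) iteration is shown below; the higher-order and adaptive extensions described in the paper are not included, and the regularization weight and iteration count are placeholders.

```python
# 1-D total-variation denoising via a majorization-minimization iteration
# (generic first-order TV only; parameters are illustrative).
import numpy as np
from scipy.sparse import spdiags, eye
from scipy.sparse.linalg import spsolve

def tv_denoise_mm(y, lam=1.0, n_iter=50, eps=1e-8):
    n = len(y)
    e = np.ones(n)
    D = spdiags([-e, e], [0, 1], n - 1, n)        # first-difference operator
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)           # quadratic majorizer weights for |Dx|
        W = spdiags(w, 0, n - 1, n - 1)
        # each iteration solves: min_x 0.5*||y - x||^2 + (lam/2) * x^T D^T W D x
        x = spsolve(eye(n, format="csc") + lam * (D.T @ W @ D), y)
    return x
```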

EEG-Based Screening Tool for School Students' Brain Disorders Using Machine Learning Algorithms

Attention-Deficit/Hyperactivity Disorder (ADHD), epilepsy, and autism affect millions of children worldwide, many of whom remain undiagnosed even though all of these disorders are detectable in early childhood. Late diagnosis can cause severe problems because of delayed treatment and because of widespread misconceptions about, and lack of awareness of, these disorders. Electroencephalography (EEG) has played a vital role in the assessment of neural function in children, so quantitative EEG measurement is utilized here as a tool for evaluating patients who may have ADHD, epilepsy, or autism. We propose a screening tool that uses EEG signals and machine learning algorithms to detect these disorders at an early age in an automated manner. As a first step, the proposed classifiers were applied to epilepsy and provided an accuracy of approximately 97% using SVM, Naïve Bayes, and Decision Tree, and 98% using KNN, which gives hope for the work yet to be conducted.
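A hedged sketch of the classifier comparison named above is given below; the feature matrix, labels, and hyperparameters are placeholders rather than the study's actual EEG data or settings.

```python
# Compare the four classifiers mentioned in the abstract on placeholder
# EEG-derived features using 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))        # placeholder EEG feature vectors
y = rng.integers(0, 2, size=500)      # placeholder epilepsy / control labels

models = {
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```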

On the Efficiency of a Five-Step Approximation Method for the Solution of General Third-Order Ordinary Differential Equations

In this work, a five-step continuous method for the solution of third-order ordinary differential equations was developed in block form using collocation and interpolation techniques with the shifted Legendre polynomial basis function. The method was found to be zero-stable, consistent, and convergent. The application of the method to third-order initial value problems of ordinary differential equations revealed that it compares favorably with existing methods.
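For orientation, the problem class and the trial-solution form underlying such a collocation-interpolation construction can be sketched as follows; the specific interpolation and collocation points and the resulting block coefficients are derived in the paper and are not reproduced here.

```latex
% Hedged sketch of the problem class and trial solution form (illustrative notation).
\[
  y''' = f\bigl(x,\,y,\,y',\,y''\bigr), \qquad
  y(x_0)=y_0,\quad y'(x_0)=y_0',\quad y''(x_0)=y_0'',
\]
\[
  y(x) \approx \sum_{j=0}^{m} a_j\,P_j(x),
\]
% where the P_j are shifted Legendre polynomials and the coefficients a_j are fixed
% by interpolating y at selected points and collocating y''' = f at further points
% over the five-step interval [x_n, x_{n+5}].
```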

Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research field for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuels from the fast reactor and high-level liquid waste from the reprocessing plant. As is the nature of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the liquid waste vessel installed in the facility, but some were not treated and remained in the experimental space as a kind of legacy waste, which must be treated safely. Meanwhile, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities; this comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned. When the plan is executed, prior treatment of the legacy waste is a necessary step for the decommissioning operation. Under these circumstances, we launched a collaborative research project called the STRAD project, which stands for Systematic Treatment of Radioactive liquid waste for Decommissioning, in order to develop treatment processes for wastes of nuclear research facilities. In this project, decomposition methods for chemicals that cause troublesome phenomena such as corrosion and explosion have been developed, and there is a prospect of decomposing them in the facility by simple methods. Solidification of aqueous or organic liquid wastes after decomposition has also been studied using cement or coagulants. Furthermore, we treated experimental tools of various materials, making an effort to stabilize and compact them before packing them into waste containers; this is expected to decrease the number of solid waste shipments and widen the operating space. Some achievements of these studies are presented in this paper. The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

A Single Switch High Step-Up DC/DC Converter with Zero Current Switching Condition

This paper presents an inverting high step-up DC/DC converter, an appealing interface for solar applications. The proposed topology takes advantage of coupled inductors. Due to the leakage inductances of these coupled inductors, the power MOSFET operates under the zero-current-switching (ZCS) condition, which reduces switching losses and substantially improves the overall efficiency of the power converter. Furthermore, employing coupled inductors leads to a higher voltage gain. Theoretical analysis and experimental results from a 100 W, 20 V/220 V prototype are presented to verify the superior performance of the proposed DC/DC converter.

Collaborative Stylistic Group Project: A Drama Practical Analysis Application

In the course of teaching stylistics to undergraduate students of the Department of English Language and Literature, Faculty of Arts and Humanities, the linguistic toolkit of theories proves useful for a better understanding of the different literary genres: poetry, drama, and short stories. In the present paper, a model for teaching stylistics is compiled and proposed. It is a collaborative group project technique for use in an undergraduate class with diverse specialisms (Literature, Linguistics, and Translation tracks). Students are initially introduced to the different linguistic tools and theories suitable for each literary genre. The second step is to apply these linguistic tools to texts. Students are required to watch videos of performances of the poems or play, for example, and search the internet for interpretations of the texts by other authorities. They use a template (prepared by the researcher) with guided questions leading them through their analysis. Finally, a practical analysis is written up using the practical analysis essay template (also prepared by the researcher). In keeping with collaborative learning, all the steps include student-centered activities that address differentiation and take the three different specialisms into account. In the process of selecting the proper tools, the actual application, and the analysis discussion, students are given tasks that require their collaboration. They also work in small groups, and the groups collaborate in seminars and group discussions. At the end of the course/module, students present their work collaboratively and reflect and comment on their learning experience. The module/course uses a drama play that lends itself to the task: 'The Bond' by Amy Lowell and Robert Frost. The project results in an interpretation of its theme, characterization, and plot. The linguistic tools are drawn from pragmatics and discourse analysis, among others.

The Use of Different Methodological Approaches to Teaching Mathematics at Secondary Level

The article describes methods of preparing future teachers that include the entire diversity of traditional and computer-oriented methodological approaches. The authors reveal how, in a specific educational environment, a teacher can choose the most effective combination of educational technologies based on the nature of the learning task. The key conditions that determine such a choice are that the methodological approach corresponds to the specificity of the problem being solved and that it is also responsive to the individual characteristics of the students. The article refers to the training of students in the proper use of mathematical electronic tools for educational purposes. The preparation of future mathematics teachers should be a step-by-step process, building on specific examples. At the first stage, students solve problems in an optimal way, aided by electronic means of teaching. At the second stage, the main emphasis is on modeling lessons. At the third stage, students develop and implement strategies for the study of one of the topics within a school mathematics curriculum. The article also recommends implementing this strategy in the preparation of future teachers and states its possible benefits.

Identifying Game Variables from Students’ Surveys for Prototyping Games for Learning

Games-based learning (GBL) has become increasingly important in teaching and learning. This paper explains the first two phases (analysis and design) of a GBL development project, ending with a prototype design based on students' and teachers' perceptions. The two phases are part of a full-cycle GBL project aiming to help secondary school students in Thailand in their study of Comprehensive Sex Education (CSE). In the course of the study, we invited 1,152 students to complete questionnaires and interviewed 12 secondary school teachers in focus groups. The paper found that GBL can serve students in their learning about CSE, enabling them to gain an understanding of their sexuality, develop skills, including critical thinking skills, and interact with others (peers, teachers, etc.) in a safe environment. The objectives of this paper are to outline the development of GBL variables from the research question(s) into the developers' flow chart, to be responsive to the GBL beneficiaries' preferences and expectations, and to help answer the research questions. The paper details the steps applied to generate GBL variables that can feed into a game flow chart to develop a GBL prototype. In our approach, we detail two models: (1) the Game Elements Model (GEM) and (2) the Game Object Model (GOM). There are three outcomes of this research. First, to achieve the objectives and benefits of GBL in learning, game design has to start with the research question(s) and the challenges to be resolved as research outcomes. Second, aligning the educational aims with engaging the GBL end users (students) during the data collection phase, so as to inform the game prototype with the game variables, is essential to arriving at an answer or solution to the research question(s). Third, for efficient GBL that bridges the gap between pedagogy and technology, answers the research questions via technology (i.e., GBL), and minimises the isolation between the pedagogists ('P') and the technologists ('T'), several meetings and discussions need to take place within the team.

The Two Layers of Food Safety and GMOs in the Hungarian Agricultural Law

The study presents the complexity of food safety by dividing it into two layers. Beyond the basic layer of requirements, there is a more demanding higher level linked with quality and purity aspects. It is important to give special prominence to both layers, given that mass illnesses are caused even by officially licensed foods. The study then discusses an exciting safety challenge stemming from the risks of genetically modified organisms (GMOs). Furthermore, it features legal case examples that illustrate how certain liability questions are solved, or not yet decided, in connection with the production of genetically modified crops. In addition, a special kind of land grabbing, more precisely land grabbing from non-GMO farming systems, can also be observed as a new phenomenon eroding food sovereignty. Coexistence, the state in which organic, conventional, and GM farming systems stand alongside each other, is an unsuitable experiment that cannot succeed, for biophysical reasons such as cross-pollination. Agricultural and environmental lawyers both try to find the optimal solution. Agri-environmental measures are introduced as a special subfield of law that also maintains food safety. The important steps of agri-environmental legislation aim at protecting natural values and the environmental media, and at strengthening food safety, in practice the quality of agricultural products intended for human consumption. The major findings of the study focus on the search for an approach capable of solving the security and safety problems of food production. The most interesting concepts of Hungarian national and EU food law legislation are analyzed in more detail with descriptive, analytic, and comparative methods.

Managing the Baltic Sea Region Resilience: Prevention, Treatment Actions and Circular Economy

Future sustainable economies worldwide are oriented towards the sea: the maritime economy is becoming one of the strongest driving forces in many regions, as population growth is highest in coastal areas. For hundreds of years, sea resources have been depleted unsustainably by fishing, mining, transportation, tourism, and waste. The European Sustainable Development Strategy identifies and develops actions to enable the EU to achieve a continuous, long-term improvement in the quality of life through the creation of sustainable communities. The aim of this paper is to provide insight into Baltic Sea Region case studies on implemented actions for tourism-industry waste and beach wrack management in coastal areas, and for the treatment of hazardous contaminants and plastic flows from waste, wastewater, and stormwater. The projects mentioned in this study promote the successful prevention of contaminant flows into the sea environment, provide perspectives for the creation of valuable new products from residuals for a future circular economy, and represent a step forward in a winning streak of green innovation.

Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points are measured at regular intervals to characterize the surface and generate a DTM of the Earth. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications; a 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, with developments in image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by a random algorithm, representing 75, 50, 25, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining DTM quality.
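The reduction-and-interpolation workflow can be sketched roughly as follows; the input file, subset fraction, grid spacing, and variogram model below are assumptions for illustration and not the authors' exact settings.

```python
# Randomly reduce an image-based point cloud to a target fraction and
# interpolate a DTM grid by ordinary kriging (pykrige); parameters are illustrative.
import numpy as np
from pykrige.ok import OrdinaryKriging

points = np.loadtxt("point_cloud.xyz")                      # columns: X, Y, Z (assumed input file)
rng = np.random.default_rng(42)
keep = rng.choice(len(points), size=int(0.5 * len(points)), replace=False)  # 50% subset
sub = points[keep]

gridx = np.arange(sub[:, 0].min(), sub[:, 0].max(), 1.0)    # 1 m grid spacing (assumed)
gridy = np.arange(sub[:, 1].min(), sub[:, 1].max(), 1.0)
ok = OrdinaryKriging(sub[:, 0], sub[:, 1], sub[:, 2], variogram_model="spherical")
dtm, variance = ok.execute("grid", gridx, gridy)            # interpolated DTM surface and kriging variance
```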