Employing QR Code as an Effective Educational Tool for Quick Access to Sources of Kindergarten Concepts

This study discusses a simple solution to the shortage of learning resources available to kindergarten teachers. Kindergarten teachers often cannot reach suitable resources through the usual search methods, such as libraries or search engines, and these methods require considerable time and effort to prepare. The study is expected to facilitate access to learning resources and suggests a potential direction for using QR codes inside the classroom. The present work proposes that QR codes can be used to digitize kindergarten curricula and to access various learning resources. It investigates the use of QR codes for storing information related to the concepts that kindergarten teachers use in the current educational situation. The researchers have established a guide for kindergarten teachers based on the official Egyptian curriculum. The guide provides different learning resources for each scientific and mathematical concept in the curriculum, and each learning resource is represented as a QR code image that contains its URL. Kindergarten teachers can therefore use smartphone applications to read the QR codes and display the related learning resources to students immediately. The guide was provided to a group of 108 teachers for use inside their classrooms. The results showed that the teachers approved of the guide and responded positively.
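As a rough illustration of how one entry of such a guide could be produced, the snippet below turns a learning-resource URL into a printable QR code image. It is only a sketch: it assumes the third-party Python package qrcode (with Pillow) is installed, and the URL and file name are hypothetical placeholders, not items from the Egyptian curriculum guide.

```python
# Sketch: encode a learning-resource URL as a QR code image that a teacher's
# smartphone app can scan to open the resource directly.
# Assumes the "qrcode" package (with Pillow) is installed; URL and file name
# are illustrative placeholders.
import qrcode

resource_url = "https://example.org/kindergarten/concepts/counting-video"
img = qrcode.make(resource_url)      # build the QR symbol for the URL
img.save("counting_video_qr.png")    # image to print in the teachers' guide
```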

Evaluation of Ensemble Classifiers for Intrusion Detection

One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated on standard intrusion-detection datasets. The proposed approach is organized into three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion-detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared with that of other standard ensemble methods: the homogeneous methods Error-Correcting Output Codes (ECOC) and Dagging, and the heterogeneous methods majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy over the individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Moreover, the heterogeneous models exhibit better results than the homogeneous models on the standard intrusion-detection datasets.
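The following sketch illustrates the general idea of the two ensemble types discussed, not the authors' exact pipeline: a homogeneous bagged SVM and a heterogeneous soft-voting combination of an RBF-kernel SVM (standing in here for an RBF network) and a linear SVM, scored by accuracy on a synthetic dataset in place of the intrusion-detection data.

```python
# Hedged sketch of homogeneous (bagging) and heterogeneous (voting) ensembles
# with SVM-type base classifiers; dataset and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Homogeneous ensemble: bagged linear-SVM base classifiers.
bagged_svm = BaggingClassifier(SVC(kernel="linear"), n_estimators=10,
                               random_state=0).fit(X_tr, y_tr)

# Heterogeneous ensemble: RBF-kernel SVM + linear SVM combined by soft voting.
hybrid = VotingClassifier(
    estimators=[("rbf", SVC(kernel="rbf", probability=True)),
                ("svm", SVC(kernel="linear", probability=True))],
    voting="soft").fit(X_tr, y_tr)

print("bagged SVM accuracy   :", accuracy_score(y_te, bagged_svm.predict(X_te)))
print("hybrid RBF-SVM accuracy:", accuracy_score(y_te, hybrid.predict(X_te)))
```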

Simulation of Soil-Pile Interaction of Steel Batter Piles Penetrated in Sandy Soil Subjected to Pull-Out Loads

Superstructures such as offshore platforms, tall buildings, transmission towers, skyscrapers and bridges are normally designed to resist compression, uplift and lateral forces arising from wind, waves, negative skin friction, ship impact and other applied loads. A better understanding and precise simulation of the response of batter piles under independent uplift loads is a vital topic and an area of active research in geotechnical engineering. This paper investigates the use of a finite element code (FEC) to examine, by numerical modelling, the behaviour of model batter piles penetrated in dense sand and subjected to pull-out loading. The Winkler model (beam on elastic foundation) concept is used, in which the interaction between the pile over its embedded depth and the adjacent soil in the bearing zone is simulated by nonlinear p-y curves. The analysis was conducted for pile slenderness ratios (lc⁄d) of 7.5, 15.22 and 30. In addition, the batter angle of the model steel pile penetrated in dense sand was set to 20°, the optimum angle for this simulation as demonstrated by other researchers in the published literature. In this numerical analysis, the soil response is idealized as elasto-plastic and the model piles are treated as elastic materials. The results revealed that the applied loads affect the pull-out pile capacity as well as the lateral pile response in dense sand, together with varying shear strength parameters linked to the pile critical depth. Furthermore, the pile pull-out capacity increases with increasing pile aspect ratio.
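For readers unfamiliar with the Winkler idealization, the sketch below evaluates one widely used hyperbolic-tangent (API-type) form of a nonlinear p-y spring for sand. It is a minimal illustration only; the parameter values are assumptions, not the soil properties or curves used in the paper.

```python
# Minimal sketch of a nonlinear p-y spring of the hyperbolic-tangent (API-type)
# form often used for sand in Winkler (beam-on-nonlinear-foundation) models.
# All parameter values are illustrative, not the ones used in the paper.
import numpy as np

def p_y_sand(y, p_ult, k, depth, A=0.9):
    """Soil resistance p (kN/m) mobilised at lateral displacement y (m)."""
    return A * p_ult * np.tanh(k * depth * y / (A * p_ult))

y = np.linspace(0.0, 0.05, 100)                       # displacement range (m)
p = p_y_sand(y, p_ult=150.0, k=2.0e4, depth=3.0)      # one spring at 3 m depth
print(f"resistance at y = 25 mm: {p_y_sand(0.025, 150.0, 2.0e4, 3.0):.1f} kN/m")
```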

Magneto-Thermo-Mechanical Analysis of Electromagnetic Devices Using the Finite Element Method

The fundamentals of pure and applied research in the area of magneto-thermo-mechanical numerical analysis and design of innovative electromagnetic devices (modern induction heaters, novel thermoelastic actuators, rotating electrical machines, induction cookers, electrophysical devices) are elaborated. Mathematical models of magneto-thermo-mechanical processes in electromagnetic devices are developed that take into account the main interactions among the interrelated phenomena. In addition, a graphical representation of the coupled (multiphysics) phenomena under consideration is proposed, and numerical techniques for solving the nonlinear problems are developed. On this basis, effective numerical algorithms for solving actual problems of practical interest are proposed, validated and implemented in newly developed applied 2D and 3D computer codes. Many applied problems of practical interest concerning modern electrical engineering devices are solved numerically. The influence of various interrelated physical phenomena (temperature dependence of material properties, thermal radiation, convective heat transfer conditions, contact phenomena, etc.) on the accuracy of the electromagnetic, thermal and structural analyses is investigated. Important practical recommendations on the choice of rational structures, materials and operating modes for the electromagnetic devices under consideration are proposed and implemented in industry.
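To make the idea of coupled-field iteration concrete, the toy sketch below shows a staggered fixed-point loop in which the Joule heat source depends on a temperature-dependent electrical conductivity, so the electromagnetic and thermal "solves" are repeated until the temperature stops changing. A lumped 0-D body replaces the finite element model purely for illustration; all material values are assumptions.

```python
# Toy staggered (fixed-point) coupling loop for a magneto-thermal problem.
# Lumped 0-D energy balance stands in for the FEM solves; values are assumed.
sigma0, alpha = 5.0e7, 4.0e-3   # conductivity at T_ref (S/m) and its temperature coefficient
T_ref, T_amb = 20.0, 20.0       # reference and ambient temperatures (deg C)
E, hA = 0.5, 50.0               # driving field (V/m) and convective loss coefficient (W/K)
volume = 1.0e-6                 # heated volume (m^3)

T = T_amb
for it in range(100):
    sigma = sigma0 / (1.0 + alpha * (T - T_ref))   # temperature-dependent conductivity
    Q = sigma * E**2 * volume                      # Joule heat generated in the body (W)
    T_new = T_amb + Q / hA                         # steady-state energy balance
    converged = abs(T_new - T) < 1.0e-6
    T = T_new
    if converged:
        break
print(f"converged temperature after {it} iterations: {T:.2f} deg C")
```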

Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals into the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector together with the atomic dictionary represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short training and test sequences as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, remaining ~93% at 0 dB SNR.
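The core of the decomposition step is greedy matching pursuit. The sketch below shows a minimal matching-pursuit loop over a random, unit-norm dictionary, accumulating the atom indices and amplitude weights that populate the sparse weight-space vector described above; the dictionary learned by the sparse autoencoder in the paper is replaced by a random one purely for illustration.

```python
# Minimal matching-pursuit sketch: greedily decompose a signal over a
# unit-norm dictionary, keeping atom indices and amplitude weights.
import numpy as np

rng = np.random.default_rng(0)
n, n_atoms, n_iter = 256, 512, 20
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

signal = rng.standard_normal(n)
residual = signal.copy()
weights = np.zeros(n_atoms)

for _ in range(n_iter):
    corr = D.T @ residual                 # correlation of residual with every atom
    k = np.argmax(np.abs(corr))           # best-matching atom index
    weights[k] += corr[k]                 # accumulate its amplitude weight
    residual -= corr[k] * D[:, k]         # remove that atom's contribution

print("selected atom indices:", np.nonzero(weights)[0])
print("relative residual energy:", np.linalg.norm(residual) / np.linalg.norm(signal))
```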

The Optimization of Engine Mounting Parts Using Hot-Cold Forging Technology

The purpose of this study is to develop a forging process for automotive parts that satisfies the required deformation characteristics. Analyses of the temperature variation and the deformation behavior of the material are important for obtaining optimal forged products. A hot compression test was carried out to determine formability at high temperature. In order to define the optimum forging conditions, including material temperature, strain and forging load, a commercial finite element analysis code was used to simulate the forging procedure for engine mounting parts. Experimental results were compared with the finite element simulation results, and the test results were in good agreement with the simulations.

Performance Analysis of IDMA Scheme Using Quasi-Cyclic Low Density Parity Check Codes

Next-generation mobile communication systems, i.e. fourth generation (4G), were developed to provide the required quality of service and data rates. This work focuses on a multiple access technique proposed for 4G communication systems and demonstrates the IDMA (Interleave Division Multiple Access) technology. The basic principle of IDMA is that the interleaver is different for each user, whereas CDMA employs different signatures. IDMA inherits many advantages of CDMA, such as robustness against fading, easy cell planning and dynamic channel sharing, while increasing spectral efficiency and reducing receiver complexity. Here, the performance of IDMA is analyzed using a QC-LDPC coding scheme and compared with LDPC coding; the bit error rate (BER) is then calculated and plotted in MATLAB.
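The defining IDMA feature referenced above, user separation by distinct chip-level interleavers, can be sketched in a few lines. The snippet below (an illustration only, written in Python rather than the MATLAB used in the paper) assigns each user its own random permutation and shows that the receiver recovers each stream with the inverse permutation; the QC-LDPC coding and the iterative chip-by-chip detector are omitted.

```python
# Sketch of the core IDMA idea: one distinct interleaver (permutation) per user.
import numpy as np

rng = np.random.default_rng(1)
chip_len, n_users = 16, 3

interleavers = [rng.permutation(chip_len) for _ in range(n_users)]   # one per user

bits = rng.integers(0, 2, size=(n_users, chip_len))                  # coded chips
tx = np.array([bits[u][interleavers[u]] for u in range(n_users)])    # interleave

# Receiver side: undo each user's permutation to recover its chip stream.
rx = np.empty_like(tx)
for u in range(n_users):
    rx[u, interleavers[u]] = tx[u]

assert (rx == bits).all()   # de-interleaving restores every user's stream
```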

Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels, and many of them require the establishment of an explicit limit state function (LSF). When the LSF is not available as a closed-form expression, simulation techniques are often employed, but simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative that allows the assessment of reliability levels when no explicit LSF is available, and without the need for extensive simulations, is employed. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results in the present study are in good agreement with those computed with MCS. Therefore, mixing reliability methods is a valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
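To illustrate the flavor of the PEM, the sketch below applies Rosenblueth's two-point estimate to a deliberately simple resistance-minus-load limit state with two uncorrelated variables and converts the resulting moments into a second-moment reliability index beta = E[g]/sd[g]. The limit state, means and standard deviations are invented placeholders, not the bridge model analyzed in the paper.

```python
# Two-point estimate method (Rosenblueth) for an illustrative limit state
# g = R - S with two uncorrelated variables; values are assumptions.
import itertools
import math

def g(R, S):                       # illustrative limit state: resistance minus load
    return R - S

mu = {"R": 120.0, "S": 80.0}       # assumed means
sd = {"R": 15.0,  "S": 10.0}       # assumed standard deviations

# Evaluate g at the 2**n sign combinations (mu_i +/- sigma_i), equal weights.
vals = [g(mu["R"] + sR * sd["R"], mu["S"] + sS * sd["S"])
        for sR, sS in itertools.product((+1, -1), repeat=2)]
mean_g = sum(vals) / len(vals)
var_g = sum(v**2 for v in vals) / len(vals) - mean_g**2

beta = mean_g / math.sqrt(var_g)   # second-moment reliability index
print(f"E[g] = {mean_g:.1f}, sd[g] = {math.sqrt(var_g):.1f}, beta = {beta:.2f}")
```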

The Integrated Urban Strategies Based on Deep Urban History and Modern Technology Study: Tourism and Leisure Industries as a Driving Force to Reactivate Historical Areas

Embracing the upcoming era of urbanization, with its challenges of limited resources, disappearing cultural identities and conflicts among different groups of stakeholders, we offer new integrated approaches in our urban practice to help decision-makers and stakeholders frame and develop well-conceived, practical strategies for urban development trajectories, aiming at urban-level sustainability in its social, cultural and ecological dimensions. Through bottom-up participation, we take advantage of tourism and leisure industries as driving forces of urbanization in China to promote integrated sustainable systems, with the hope of addressing both the historical and the ecological aspects of urban sustainability; through top-down participation, codes, standards and rules established by governments strengthen the implementation of ecological urban sustainability. The results are monitored and evaluated experimentally and multidimensionally, and the sustainable systems we constructed with local stakeholder groups proved effective. The selected projects presented here illustrate our different focuses on urban sustainability.

Stability of Concrete Moment Resisting Frames in View of Current Code Requirements

In this study, the different approaches currently followed by design codes to assess the stability of buildings that use concrete moment-resisting frames as the structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment-resisting frames with different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios and constructed with ordinary reinforced concrete and high-strength concrete, and checking their stability and overall buckling using both the code approaches and computer buckling analysis. The objectives were to examine the influence of these parameters, which are directly linked to the frames' lateral stiffness, on building stability, and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and to magnification of second-order effects are those with high aspect ratios (height/width ratio), low lateral-to-vertical load ratios and high-strength construction materials. In addition, the study showed that the instability limits imposed by the codes are mainly mathematical, intended to ensure a reliable analysis rather than physical limits, and that they are in general conservative. It has also been shown that the upper limit set by one of the codes, namely that the second-order moment for structural elements should be limited to 1.4 times the first-order moment, is not justified; the overall story check is more reliable instead.
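As a point of reference for the kind of code check discussed, the sketch below evaluates an ACI 318-style story stability index Q = (ΣP_u·Δ_o)/(V_us·l_c) and the corresponding second-order magnifier 1/(1 − Q). The numerical values are illustrative placeholders, not results from the parametric study.

```python
# Code-style sway stability check: story stability index and moment magnifier.
# All input values are illustrative assumptions.
sum_Pu  = 12_000.0    # total factored gravity load on the story (kN)
delta_o = 0.015       # first-order relative story drift (m)
V_us    = 900.0       # factored story shear (kN)
l_c     = 3.5         # story height (m)

Q = (sum_Pu * delta_o) / (V_us * l_c)     # stability index
magnifier = 1.0 / (1.0 - Q)               # approximate second-order amplification

print(f"stability index Q = {Q:.3f}")
print("story classed as sway (second-order effects significant)" if Q > 0.05
      else "story classed as non-sway")
print(f"second-order / first-order moment ratio ~ {magnifier:.2f}")
```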

Analysis of P, d and 3He Elastically Scattered by 11B Nuclei at Different Energies

The elastic scattering of protons and deuterons from 11B nuclei at different p and d energies has been analyzed within the framework of the optical model using the code ECIS88. The elastic scattering of the 3He+11B nuclear system at different 3He energies has been analyzed using the double folding model with the code FRESCO. The real potential obtained from the folding model was supplemented by a phenomenological imaginary potential, and during the fitting process the real potential was normalized and the imaginary potential optimized. Volume integrals of the real and imaginary potential depths (JR, JW) have been calculated for the 3He+11B system. The agreement between the experimental data and the theoretical calculations over the whole angular range is fairly good. The normalization factor Nr lies in the range between 0.70 and 1.236.
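For orientation, the volume integral per interacting nucleon pair usually quoted for such potentials is J = (4π/(A_p·A_t)) ∫ V(r) r² dr. The sketch below evaluates it numerically for a Woods-Saxon shape with purely illustrative parameters; it is not the folded 3He+11B potential obtained in the paper.

```python
# Numerical volume integral of an illustrative Woods-Saxon potential,
# normalized per interacting nucleon pair. Parameters are assumptions.
import numpy as np

def woods_saxon(r, V0, R0, a):
    """Woods-Saxon shape: depth V0 (MeV), radius R0 (fm), diffuseness a (fm)."""
    return -V0 / (1.0 + np.exp((r - R0) / a))

A_p, A_t = 3, 11                          # 3He projectile, 11B target mass numbers
V0, R0, a = 100.0, 2.6, 0.65              # illustrative, not fitted, parameters

r = np.linspace(0.0, 20.0, 4001)
dr = r[1] - r[0]
integral = np.sum(woods_saxon(r, V0, R0, a) * r**2) * dr   # simple Riemann sum
J = 4.0 * np.pi * abs(integral) / (A_p * A_t)              # magnitude, as usually quoted
print(f"volume integral per interacting nucleon pair: {J:.0f} MeV fm^3")
```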

Saliva Cortisol and Yawning as a Predictor of Neurological Disease

Cortisol is important to our immune system, regulates our stress response, and is a factor in maintaining brain temperature. Saliva cortisol is a practical and useful non-invasive measurement that signifies the presence of this important hormone. Electrical activity in the jaw muscles typically rises when the muscles move during yawning, and this electrical level is found to be correlated with the cortisol level. In two studies using identical paradigms, a total of 108 healthy subjects were exposed to yawning-provoking stimuli, and their cortisol levels and the electrical nerve impulses from their jaw muscles were recorded. Electrical activity is highly correlated with cortisol levels in healthy people. The Hospital Anxiety and Depression Scale, the Yawning Susceptibility Scale, the General Health Questionnaire, and demographic and health details were collected, and exclusion criteria were applied to the voluntary recruits: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found in the saliva cortisol samples of the yawners compared with the non-yawners, between rest and post-stimulus. Significant evidence supports the Thompson Cortisol Hypothesis, which suggests that rises in cortisol levels are associated with yawning. Ethics approval was granted, and the professional code of conduct, confidentiality, and safety requirements were observed.

In-Flight Radiometric Performance Analysis of an Airborne Optical Payload

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish its in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera is analyzed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from the in situ measurements (atmospheric parameters and the spectral reflectance of the target) and the viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by regression, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range are then LH = G × DNH + B and LL = B, respectively, where DNH equals 2^n − 1 (n being the quantization depth of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regression line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1·m−2·µm−1 and −3.5 W·sr−1·m−2·µm−1, respectively; the low point of the dynamic range is −3.5 W·sr−1·m−2·µm−1 and the high point is 30.5 W·sr−1·m−2·µm−1; and the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR: the normalized SNR is about 59.6 when the mean radiance equals 11.0 W·sr−1·m−2·µm−1, and the radiometric resolution is subsequently calculated to be about 0.1845 W·sr−1·m−2·µm−1. Moreover, to validate the results, the measured radiances over four portable artificial targets with reflectances of 20%, 30%, 40% and 50% are compared with the radiances predicted by a radiative transfer code. The relative error of the calibration is within 6.6%.
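The calibration step itself is a one-line regression once the (DN, radiance) pairs are available. The sketch below fits L = G × DN + B, derives the dynamic-range end points, and reports the correlation coefficient as the linearity measure. The three data points and the 12-bit quantization depth are invented placeholders chosen only to be of plausible magnitude, not the paper's measurements.

```python
# Fit the in-flight calibration line and derive dynamic range and linearity.
# The (DN, L) pairs and the quantization depth are assumed placeholders.
import numpy as np

DN = np.array([310.0, 1450.0, 2600.0])   # digital numbers of the gray-scale targets
L  = np.array([-0.9,   8.6,   18.1])     # simulated at-sensor radiances (W sr^-1 m^-2 um^-1)

G, B = np.polyfit(DN, L, 1)              # slope = gain G, intercept = offset B
n_bits = 12                              # assumed quantization depth
L_low, L_high = B, G * (2**n_bits - 1) + B
linearity = np.corrcoef(DN, L)[0, 1]     # response linearity measure

print(f"G = {G:.4f}, B = {B:.2f}")
print(f"dynamic range: {L_low:.2f} to {L_high:.2f} W sr^-1 m^-2 um^-1")
print(f"response linearity: {linearity:.4f}")
```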

Applications for Accounting of Inherited Object-Oriented Class Members

A class in an Object-Oriented (OO) system is the basic unit of design, and it encapsulates a set of attributes and methods. In OO systems, instead of redefining attributes and methods that are already included in other classes, a class can inherit these attributes and methods and implement only its unique ones, which reduces code redundancy and improves code testability and maintainability. This mechanism is called class inheritance. However, some software engineering applications may require accounting for all the inherited class members (i.e., attributes and methods). This paper explains how to account for inherited class members and discusses the software engineering applications that require such accounting.
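As a small, hypothetical illustration (not the paper's own accounting tool), the snippet below uses Python reflection to separate the members a class defines itself from those it inherits; attributes assigned inside __init__ live on instances rather than on the class and would need a separate pass.

```python
# Sketch: counting a class's own members versus inherited members via reflection.
import inspect

class Shape:
    """Base class defining common members."""
    def __init__(self):
        self.kind = "shape"
    def describe(self):
        return self.kind
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    """Inherits describe(); overrides area(); adds an instance attribute."""
    def __init__(self, radius):
        super().__init__()
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

own = [n for n in vars(Circle) if not n.startswith("__")]
inherited = [n for n, _ in inspect.getmembers(Circle)
             if not n.startswith("__") and n not in vars(Circle)]
print("defined in Circle:", own)        # ['area']
print("inherited members:", inherited)  # ['describe']
```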

Evaluation of Behavior Factor for Steel Moment-Resisting Frames

According to current seismic codes, structures are designed using the capacity design procedure based on the concept of base shear, which depends on several parameters, among which the behavior factor is considered the most important. The behavior factor allows the structure to be designed at its ultimate limit state while taking into account the energy dissipated through plastic deformation. The aim of the present study is to assess the basic parameters of which the behavior factor is composed, namely the reduction factor due to ductility and the factors due to redundancy and overstrength, for steel moment-resisting frames of different heights and regular configuration. Analyses are conducted on these frames using the nonlinear static (pushover) method, and the effect of parameters such as the number of stories and the number of spans on the behavior factor is taken into account. The results show that the behavior factor is rather sensitive to variations in the number of stories and bays.
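A commonly used decomposition of the behavior factor is q = R_mu × Ω × ρ (ductility reduction, overstrength, redundancy). The sketch below evaluates it from pushover-style quantities; the numbers and the equal-displacement assumption for R_mu are illustrative assumptions, not the paper's results.

```python
# Hedged illustration of the behavior-factor decomposition q = R_mu * Omega * rho.
# All pushover quantities below are invented placeholders.
V_design = 450.0   # design base shear (kN)
V_yield  = 720.0   # base shear at formation of the plastic mechanism (kN)
mu       = 4.0     # displacement ductility from the pushover curve
rho      = 1.1     # assumed redundancy factor

Omega = V_yield / V_design   # overstrength factor
R_mu  = mu                   # equal-displacement assumption (long-period range)
q     = R_mu * Omega * rho

print(f"overstrength = {Omega:.2f}, ductility reduction = {R_mu:.1f}, q ~ {q:.1f}")
```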

Leading, Teaching and Learning “in the Middle”: Experiences, Beliefs, and Values of Instructional Leaders, Teachers, and Students in Finland, Germany, and Canada

Through the exploration of the lived experiences, beliefs and values of instructional leaders, teachers and students in Finland, Germany and Canada, we investigated the factors that contribute to developmentally responsive, intellectually engaging middle-level learning environments for early adolescents. Student-centred leadership dimensions, effective instructional practices and student agency were examined through the lens of current policy and research on middle-level learning environments emerging from the Canadian province of Manitoba. Consideration of these three research perspectives in the context of early adolescent learning, placed against an international backdrop, provided a previously undocumented perspective on leading, teaching and learning in the middle years. Aligning with a social constructivist, qualitative research paradigm, the study incorporated collective case study methodology, along with constructivist grounded theory methods of data analysis. Data were collected through semi-structured individual and focus group interviews and document review, as well as direct and participant observation. Three case study narratives were developed to share the rich stories of study participants, who had been selected using maximum variation and intensity sampling techniques. Interview transcript data were coded using processes from constructivist grounded theory. A cross-case analysis yielded a conceptual framework highlighting key factors that were found to be significant in the establishment of developmentally responsive, intellectually engaging middle-level learning environments. Seven core categories emerged from the cross-case analysis as common to all three countries. Within the visual conceptual framework (which depicts the interconnected nature of leading, teaching and learning in middle-level learning environments), these seven core categories were grouped into Essential Factors (student agency, voice and choice), Contextual Factors (instructional practices; school culture; engaging families and the community), Synergistic Factors (instructional leadership) and Cornerstone Factors (education as a fundamental cultural value; pre-service, in-service and ongoing teacher development). In addition, sub-factors emerged from recurring codes in the data and identified specific characteristics and actions found in developmentally responsive, intellectually engaging middle-level learning environments. Although this study focused on 12 schools in Finland, Germany and Canada, it informs the practice of educators working with early adolescent learners in middle-level learning environments internationally. The authentic voices of early adolescent learners are the most important resource educators have to gauge whether they are creating effective learning environments for their students. Ongoing professional dialogue and learning are essential to ensure teachers are supported in their work and develop the pedagogical practices needed to meet the needs of early adolescent learners. It is critical to balance consistency, coherence and dependability in the school environment with the flexibility necessary to support the unique learning needs of early adolescents. Educators must intentionally create a school culture that unites teachers, students and their families in support of a common purpose, as well as nurture positive relationships between the school and its community.
A large, urban school district in Canada has implemented a school cohort-based model to begin to bring developmentally responsive, intellectually engaging middle-level learning environments to scale.

A Hybrid P2P Storage Scheme Based on Erasure Coding and Replication

A peer-to-peer storage system faces challenges such as peer availability, data protection and churn rate. To address these challenges, different redundancy, replacement and repair schemes are used. This paper presents a hybrid redundancy scheme that combines replication and erasure coding. We calculate and compare the storage, access, and maintenance costs of our proposed scheme with those of existing redundancy schemes. For realistic peer behaviour, a trace of a live peer-to-peer system is used. The effects of different replication and repair schemes are also shown. The proposed hybrid scheme performs better than the existing double-coding hybrid scheme in all metrics and has an improved maintenance cost compared with hierarchical codes.
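For intuition about the two redundancy building blocks being combined, the sketch below compares the storage overhead and availability of plain r-way replication with a (k, n) erasure code under an assumed, illustrative peer online probability; the hybrid scheme and trace-driven evaluation of the paper are not reproduced here.

```python
# Back-of-the-envelope comparison of replication vs. (k, n) erasure coding.
# The peer online probability p is an assumed illustrative value.
from math import comb

def availability_replication(p, r):
    """Object retrievable if at least one of r replicas is online."""
    return 1.0 - (1.0 - p) ** r

def availability_erasure(p, k, n):
    """Object retrievable if at least k of the n coded fragments are online."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.7
print("3-way replication : overhead 3.0x, availability",
      round(availability_replication(p, 3), 4))
print("(8, 12) erasure code: overhead 1.5x, availability",
      round(availability_erasure(p, 8, 12), 4))
```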

Origins of Strict Liability for Abnormally Dangerous Activities in the United States, Rylands v. Fletcher and a General Clause of Strict Liability in the UK

The paper traces the birth and evolution of the British precedent Rylands v. Fletcher which, once adopted on the other side of the ocean (in the United States), gave rise to a general clause of liability for abnormally dangerous activities recognized by § 20 of the American Restatement of the Law Third, Liability for Physical and Emotional Harm. The main goal of the paper is to analyze the development of the legal doctrine and of the case law subsequent to the precedent, together with the intent of the British judicature to leap from the traditional rule contained in Rylands v. Fletcher to a general clause similar to that introduced in the United States and, recently, also at the European level. As is well known, within the scope of tort law two different initiatives compete with the aim of harmonizing European laws: the European Group on Tort Law with its Principles of European Tort Law (hereinafter PETL), whose article 5:101 sets forth a general clause of strict liability for abnormally dangerous activities, and the Study Group on a European Civil Code with its Common Frame of Reference (CFR), which instead promotes an ad hoc model listing determined cases of strict liability. The very narrow scope of application of art. 5:101 PETL, restricted only to abnormally dangerous activities, stands in opposition to the very broad spectrum of strict liability cases governed by the CFR. The former is a perfect example of a general clause that offers a minimum and basic standard, possibly acceptable also in those countries in which, as in the United Kingdom, this regime of liability is completely marginalized.

Coding Considerations for Standalone Molecular Dynamics Simulations of Atomistic Structures

The laws of Newtonian mechanics allow ab initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for coding ab initio programs for simulation on a standalone computer and illustrates the approach with C language code in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine the evolution of particle parameters for up to several thousand particles in a thermodynamic ensemble. Such functions are reusable and can be placed in a redistributable header library file. While both commercial and free packages are available, their heuristic nature prevents dissection. In addition, developing one's own code has the obvious advantage of teaching techniques applicable to new problems.
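The kind of reusable velocity-time integration routine described above can be sketched compactly. The example below (written in Python for brevity, with a Lennard-Jones pair force standing in for the embedded-atom potential and only three atoms in reduced units) shows a velocity Verlet loop of the sort the paper places in a header library; it is an assumption-laden illustration, not the paper's C code.

```python
# Minimal velocity-Verlet time integration with a Lennard-Jones pair force.
# Reduced units (epsilon = sigma = mass = 1); particle set is illustrative.
import numpy as np

def lj_forces(x):
    """Pairwise Lennard-Jones forces on every particle."""
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            d = x[i] - x[j]
            r2 = np.dot(d, d)
            inv6 = 1.0 / r2**3
            fmag = 24.0 * (2.0 * inv6**2 - inv6) / r2   # (dU/dr)/r with sign
            f[i] += fmag * d
            f[j] -= fmag * d
    return f

dt, steps = 0.001, 200
x = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.2, 0.0]])   # 3 atoms
v = np.zeros_like(x)
f = lj_forces(x)
for _ in range(steps):                      # velocity Verlet integration
    x += v * dt + 0.5 * f * dt**2           # position update
    f_new = lj_forces(x)                    # forces at the new positions
    v += 0.5 * (f + f_new) * dt             # velocity update with averaged force
    f = f_new
print("final positions:\n", x)
```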

Four Phase Methodology for Developing Secure Software

A simple and robust approach for developing secure software is presented. The Four Phase methodology consists of developing the non-secure software in phase one and, in the next three phases, devoting one phase to each of the secure development types (i.e. self-protected software, secure code transformation, and the secure shield). Our methodology first requires determining and understanding the level of security needed for the software. The methodology proposes the use of several teams to accomplish this task: one Software Engineering Developing Team, a Compiler Team, a Specification and Requirements Testing Team, and, for each of the secure software development types, three teams of Secure Software Developing, three teams of Code Breakers, and three teams of Intrusion Analysis. These teams interact with each other and make decisions to produce secure software code protected against the required level of intruder.