Sex Education: A Need for Students with Disabilities in India

Sexuality remains a personal or private matter of discussion in Indian society and is generally discussed only within the same age group or gender. The complete absence of sex education has had serious implications for students with disabilities in Indian society. There is a widespread perception that students with disabilities are ‘asexual’ and ‘unattractive’ and therefore cannot be considered sexually desirable. Such perceptions reinforce the further perception that students with disabilities are somehow incapable of being in an intimate relationship and therefore do not need any learning related to sex education. We need to understand that a student’s disability does not mean that the student lacks the emotional feelings, hormones and sexual desires of any other student without a disability. Sexuality is an integral part of every human life and should not be seen as a matter of shame and guilt. Unfortunately, the concept of sex education is itself misunderstood. Instead of recognizing its crucial importance for students with or without disabilities, it is often regarded mainly as education about ‘how to have sexual intercourse’. One needs to understand that it concerns not just sexual conduct but also gender and sexual identity, self-esteem, self-protection and acceptance of self. This paper examines issues and debates around sex education, particularly in the context of students with disabilities in India, and focuses on how students with disabilities themselves see the need for sex (health) education. To understand their perceptions, a descriptive survey method was used. It was found that most of the respondents were comfortable with the topic and felt a strong need for such orientation during their schooling. The paper emphasizes that sex education is both timely and necessary; hence it is important for our education system to implement it for the complete well-being of students with disabilities.

The Latency-Amplitude Binomial of Waves Resulting from the Application of Evoked Potentials for the Diagnosis of Dyscalculia

Recent advances in cognitive neuroscience have allowed a step forward in understanding the processes involved in learning, viewed as the acquisition of new information or the modification of existing mental content. The evoked potentials technique reveals how basic brain processes interact to achieve adequate and flexible behaviours. The objective of this work is to study, using evoked potentials, whether it is possible to determine if a patient suffers from a specific type of learning disorder in order to decide which therapies to follow. The methodology is to analyze the dynamics of different brain areas during a cognitive activity and to find the relationships among the areas analyzed, in order to better understand the functioning of neural networks. The latest advances in neuroscience have also revealed the existence of distinct brain activity during the learning process that can be highlighted through non-invasive, innocuous, low-cost and easily accessible techniques such as evoked potentials, which can help to detect possible neurodevelopmental difficulties early for subsequent assessment and therapy. From the study of the amplitudes and latencies of the evoked potentials, it is possible to detect brain alterations in the learning process, specifically in dyscalculia, and to derive specific corrective measures for personalized psycho-pedagogical plans that allow optimal integral development of the people affected.
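For readers unfamiliar with how latency and amplitude are extracted from evoked potentials, the following minimal sketch (not from the paper; the sampling rate, search window and component polarity are illustrative assumptions) shows the basic computation on an averaged epoch.

```python
# Hedged sketch: peak amplitude and latency of an averaged evoked-potential waveform.
# Sampling rate, window and polarity are illustrative assumptions, not the paper's values.
import numpy as np

def peak_amplitude_latency(epochs, fs=1000, window=(0.25, 0.45), positive=True):
    """epochs: array (n_trials, n_samples), time-locked to stimulus onset.
    Returns (amplitude, latency_s) of the peak inside `window`."""
    erp = epochs.mean(axis=0)                      # average across trials
    t = np.arange(erp.size) / fs                   # time axis in seconds
    mask = (t >= window[0]) & (t <= window[1])     # restrict to the search window
    segment = erp[mask]
    idx = segment.argmax() if positive else segment.argmin()
    return segment[idx], t[mask][idx]

# Usage with synthetic data (40 trials, 1 s epochs)
rng = np.random.default_rng(0)
epochs = rng.normal(0, 2, size=(40, 1000))
epochs[:, 300:340] += 5                            # simulated positive deflection near 0.3 s
amp, lat = peak_amplitude_latency(epochs)
print(f"peak amplitude {amp:.2f} uV at latency {lat*1000:.0f} ms")
```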

An Examination of the Factors Affecting the Adoption of Cloud Enterprise Resource Planning Systems in Egyptian Companies

Enterprise resource planning (ERP) is an integrated system that helps companies manage their resources. There are two types of ERP systems: traditional ERP systems and cloud ERP systems, the latter introduced after the development of cloud computing technology. This research aims to identify the factors that affect the adoption of cloud ERP in Egyptian companies, to provide guidance to Egyptian companies facing the cloud ERP adoption decision, and to add to the still small number of cloud ERP studies conducted in the Middle East and in developing countries. The factors influencing the adoption of cloud ERP in Egyptian organizations are discussed and explained in the research, and are examined by combining the Diffusion of Innovation (DOI) theory and the technology-organization-environment (TOE) framework. Data were collected through a survey developed using constructs from existing studies of cloud computing and cloud ERP technologies and then modified to fit our research. The data were analysed with Structural Equation Modeling (SEM) using SmartPLS software for the empirical analysis of the research model.
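As an illustration of the kind of measurement check that typically precedes a PLS-SEM analysis in SmartPLS, the sketch below computes Cronbach's alpha for one hypothetical construct; the construct name, indicator columns and data are assumptions, not the authors' instrument.

```python
# Hedged sketch: reliability (Cronbach's alpha) for one survey construct before SEM.
# Column names and data are illustrative assumptions.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: respondents x indicator columns for a single construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=200)                     # hypothetical "relative advantage" construct
df = pd.DataFrame({f"rel_adv_{i}": latent + rng.normal(0, 0.6, 200) for i in range(1, 4)})
print(round(cronbach_alpha(df), 3))               # values above ~0.7 are usually deemed acceptable
```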

Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Since the advent of modern architecture, notions of the free plan and transparency have proliferated well into current trends. The movement’s notion of a spatially homogeneous, open and limitless ‘free plan’ contrasts with the spatially heterogeneous ‘series of rooms’ defined by load-bearing walls, and in turn triggered new notions of transparency created by vast expanses of glazed walls. Transparency was likewise dichotomized as something physical or optical as well as something conceptual, akin to spatial organization. Rather than merely accepting the duality and possible incompatibility of these dichotomies, this paper asks how space can be both literally and phenomenally transparent, and exhibit both homogeneous and heterogeneous qualities. The paper explores this potential destabilization or blurring of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambiguity and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA are discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. Their buildings are explored to reveal innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior whereby visual homogeneity blurs with spatial heterogeneity. The results show how spatial conceptions such as interpenetration and transparency can subvert not only inside-outside dialectics but also produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux, as well as present alternative forms of social interaction.

Finite Element Modelling of Log Wall Corner Joints

The paper presents the outcomes of numerical research on standard and dovetail corner joints under lateral loads, together with an overview of past research on log shear walls. To the authors’ best knowledge, there are currently no specific design guidelines available in the code for the design of log shear walls, implying the need to investigate their performance. This research explores the performance of log shear wall corner joint systems of the standard and dovetail types using numerical methods based on research available in the literature. A parametric study is performed to examine the effect of the gap size provided between two orthogonal logs, and of the wood and steel dowels provided as joinery between log courses, on the performance of such a structural system. The primary research outcomes are force-displacement curves. A variability of 8% is seen in the reaction forces with the change of gap size for the standard joint, while a variation of 10% is observed in the reaction forces for the dovetail joint system.

Comparative Study of Affricate Initial Consonants in Chinese and Slovak

The purpose of this comparative study of affricate consonants in Chinese and Slovak is to increase awareness of the main distinguishing features between the two languages with respect to this particular group of consonants. We determine the main difficulties Slovak learners face in acquiring the correct pronunciation of affricate initial consonants in Chinese, based on an understanding of the distinguishing features of Chinese and Slovak affricates combined with experimental measurement of voice onset time (VOT) values. The software tool Praat is used to analyse the recorded language samples, which contain recordings of a Chinese native speaker and of Slovak students of Chinese at different proficiency levels. Based on the results of the analysis in Praat, we identify erroneous pronunciation and clarify its causes.
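A minimal sketch of the VOT computation itself is given below; the speaker groups, syllables and time stamps are invented illustrative values, since in the study these landmarks are measured in Praat.

```python
# Hedged sketch: VOT as the interval between the annotated burst release and voicing onset.
# All values below are invented for illustration; the paper measures these landmarks in Praat.
import pandas as pd

annotations = pd.DataFrame({
    "speaker": ["native", "native", "learner", "learner"],
    "syllable": ["zi", "ci", "zi", "ci"],
    "burst_s": [0.512, 1.204, 0.498, 1.310],
    "voicing_onset_s": [0.527, 1.291, 0.539, 1.372],
})
annotations["vot_ms"] = (annotations["voicing_onset_s"] - annotations["burst_s"]) * 1000

# Mean VOT per speaker group and target syllable, to compare native and learner values
print(annotations.groupby(["speaker", "syllable"])["vot_ms"].mean().round(1))
```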

Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from ISO/IEC 25010: the multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker’s importance weights for each sub-characteristic, and likewise to establish the decision matrix of the decision maker’s final scores for each software product against each sub-characteristic. Objective criteria importance weights and index scores for the datasets were thus obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were evaluated with the multiple criteria decision analysis method of preference analysis for the reference ideal solution (PARIS), followed by a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and the ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
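To illustrate the data setup described above (random importance weights and a random decision matrix), the following sketch ranks alternatives with a simple weighted-sum aggregation after vector normalization; it is not an implementation of the PARIS method, whose details are not given here.

```python
# Hedged sketch: random decision matrix and weights, ranked by a weighted-sum score.
# Illustrates the data setup only, not the PARIS method itself.
import numpy as np

rng = np.random.default_rng(2025)
n_products, n_subchars = 5, 8                 # alternatives x ISO/IEC 25010 sub-characteristics

weights = rng.random(n_subchars)
weights /= weights.sum()                      # importance weights summing to 1

scores = rng.uniform(1, 9, size=(n_products, n_subchars))   # decision matrix (benefit criteria)
norm = scores / np.sqrt((scores ** 2).sum(axis=0))          # vector normalization per criterion

overall = norm @ weights                      # weighted aggregate score per software product
ranking = overall.argsort()[::-1] + 1         # product indices, best first (1-based)
print("overall scores:", overall.round(3))
print("ranking (best first):", ranking)
```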

Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practical applicability in medicine. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baselines on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN is effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
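A minimal PyTorch sketch of the core idea, concatenating lexical semantic vectors with word embeddings before a convolutional layer, is shown below; the dimensions, layer sizes and class count are illustrative assumptions rather than the authors' exact LSV-SDG-CNN architecture, and the SDG mechanism is not reproduced.

```python
# Hedged sketch (PyTorch): concatenating LSV features with word2vec embeddings before a 1D CNN.
# Dimensions and the classifier head are illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class LSVCNN(nn.Module):
    def __init__(self, word_dim=100, lsv_dim=20, n_filters=64, n_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(word_dim + lsv_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, word_emb, lsv):
        # word_emb: (batch, seq_len, word_dim) from word2vec
        # lsv:      (batch, seq_len, lsv_dim)  lexical semantic vectors for medical terms
        x = torch.cat([word_emb, lsv], dim=-1).transpose(1, 2)   # (batch, dim, seq_len)
        h = torch.relu(self.conv(x)).max(dim=2).values           # global max pooling
        return self.fc(h)                                        # disease logits

model = LSVCNN()
logits = model(torch.randn(8, 50, 100), torch.randn(8, 50, 20))
print(logits.shape)   # torch.Size([8, 4])
```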

A Numerical Study of the Interaction between Residual Stress Profiles Induced by Quasi-Static Plastification

One of the most relevant phenomena in manufacturing is the development of the residual stress state through the manufacturing chain. In most cases, residual stresses originate in the heterogeneous plastification produced by the processes. Although a few manufacturing processes have been successfully approached by numerical modeling, there is still a lack of understanding of how the interactions between these processes affect the final stress state. The objective of this work is to analyze the effect of a grinding procedure on the residual stress state generated by quasi-static indentation. The model consists of a simplified representation of shot peening, covering four cases with variations in indenter size and force, and was validated through topography measured by optical 3D focus variation. The indentation model configured with two loads was then subjected to two grinding procedures and the results were analyzed. It was observed that the grinding procedure has a significant effect on the stress state.

An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development

This paper advances the conception that complex problems do not necessarily need similarly complex solutions in order to cope with complexity; rather, a simple solution based on established methods can provide a sufficient way of dealing with it. To verify this conception, the paper focuses on the field of change management as part of the new product development process in the automotive sector. In complexity management, dealing with increasing complexity is essential, yet only inflexible, rigid processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) a literature review to identify potential solutions and methods, 3) capturing and implementing the expertise of experts from the change management field of an automobile manufacturing company, and 4) a systematic comparison of the methods identified in the literature, connecting them with the defined requirements arising from the complexity of change management in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, the approach supports the conception that dealing with complexity is possible using rather simple and established methods by combining them into a powerful tool.

Physicochemical and Thermal Characterization of Starch from Three Different Plantain Cultivars in Puerto Rico

Plantain contains starch as its main component and represents a relevant source of this carbohydrate. Starches from different cultivars of plantain and banana have been studied for industrialization purposes because of their morphological and thermal characteristics and their influence on food products. This study aimed to characterize the physical, chemical, and thermal properties of starch from three plantain cultivars grown in Puerto Rico: Maricongo, Maiden and FHIA 20. Amylose and amylopectin content, color, granular size, morphology, and thermal properties were determined. FHIA 20 presented the lowest amylose content of the three cultivars studied. In terms of color, Maiden and FHIA 20 starches exhibited significantly higher whiteness indexes than Maricongo starch. Starches of the three cultivars had an elongated-ovoid morphology with a smooth surface and a non-porous appearance. Despite the similarities in morphology, FHIA 20 exhibited a lower aspect ratio since its granules tended to be more elongated. Comparison of the thermal properties showed that the initial starch gelatinization temperature was similar among cultivars; however, FHIA 20 starch presented a noticeably higher final gelatinization temperature (87.95°C) and transition enthalpy than Maricongo (79.69°C) and Maiden (77.40°C). Despite their similarities, starches from the plantain cultivars showed differences in composition and thermal behavior, which represents an opportunity to diversify the use of plantain starch in food-related applications.

OILU Tag: A Projective Invariant Fiducial System

This paper presents the development of a 2D visual marker derived from recent patented work in the field of numbering systems. The proposed fiducial uses a group of projective-invariant straight-line patterns that are easily detectable and remotely recognizable. Based on an efficient data coding scheme, the developed marker enables the production of a large panel of unique real-time identifiers with highly distinguishable patterns. The proposed marker incorporates decimal and binary information simultaneously, making it readable by both humans and machines. This important feature opens up new opportunities for the development of efficient visual human-machine communication and monitoring protocols. Extensive experimental tests validate the robustness of the marker against acquisition and geometric distortions.
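The projective invariance underlying straight-line fiducial patterns can be illustrated with the cross-ratio of four collinear points, which is preserved under any homography; the sketch below is a generic demonstration and does not reproduce the patented OILU coding scheme.

```python
# Hedged sketch: the cross-ratio of four collinear points, a classic projective invariant.
# Illustrates why straight-line patterns remain recognizable under perspective distortion;
# it is not the OILU tag's actual coding scheme.
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC)/(AD/BD) of four collinear 2D points a, b, c, d."""
    dist = lambda p, q: np.linalg.norm(np.asarray(p) - np.asarray(q))
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

def project(points, H):
    """Apply a 3x3 homography H to 2D points (perspective camera distortion)."""
    pts = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [7.0, 0.0]])
H = np.array([[1.2, 0.1, 3.0], [0.05, 0.9, -1.0], [0.001, 0.002, 1.0]])
print(round(cross_ratio(*pts), 4), round(cross_ratio(*project(pts, H)), 4))  # equal values
```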

Threshold Concepts in TESOL: A Thematic Analysis of Disciplinary Guiding Principles

The notion of Threshold Concepts has offered a fertile new perspective on the transformative effects of mastery of particular concepts on student understanding of subject matter and their developing identities as inductees into disciplinary discourse communities. Only by successfully traversing essential knowledge thresholds can neophytes achieve the more sophisticated understandings of subject matter possessed by mature members of a discipline. This paper uses thematic analysis of disciplinary guiding principles to identify nine candidate Threshold Concepts that appear to underpin effective TESOL practice. The relationship between these candidate TESOL Threshold Concepts, TESOL principles, and TESOL instructional techniques appears to be amenable to a schematic representation based on superordinate categories of TESOL practitioner concern and, as such, offers an alternative to the view of Threshold Concepts as a privileged subset of disciplinary core concepts. The paper concludes by exploring the potential of a Threshold Concepts framework to productively inform TESOL initial teacher education (ITE) and in-service education and training (INSET).

Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

t-SNE is an embedding method widely used by the data science community. It supports two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, whereby all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While highly scalable, such a model could map points to exactly the same position, making them indistinguishable, and would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with each newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing us to observe the birth, evolution and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
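A rough sketch of the reuse idea, assuming scikit-learn's TSNE and using the previous embedding as the initialization of the next one so that cluster positions stay coherent, is shown below; it does not reproduce the paper's two-term cost (shape plus support match), only the spirit of embedding with a support.

```python
# Hedged sketch: reuse a previous t-SNE embedding as the initialization of a new one,
# so clusters tend to reappear in the same regions of the map. This mimics the spirit
# of the support embedding; the paper's exact two-term cost is not reproduced here.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 30))                 # first data subset
X_new = X_old + rng.normal(0, 0.05, X_old.shape)   # later snapshot of the same items

emb_old = TSNE(n_components=2, random_state=0).fit_transform(X_old)

# Initialize the new embedding with the old coordinates instead of a random layout.
emb_new = TSNE(n_components=2, init=emb_old, random_state=0).fit_transform(X_new)

drift = np.linalg.norm(emb_new - emb_old, axis=1).mean()
print(f"mean per-point drift between embeddings: {drift:.2f}")
```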

Cybersecurity for Digital Twins in the Built Environment: Research Landscape, Industry Attitudes and Future Direction

Technological advances in the construction sector are helping to make smart cities a reality by means of Cyber-Physical Systems (CPS), which integrate information and the physical world through the use of Information Communication Technologies (ICT). An increasingly common goal in the built environment is to integrate Building Information Models (BIM) with the Internet of Things (IoT) and sensor technologies using CPS. Future advances could see the adoption of digital twins, creating new opportunities for CPS using monitoring, simulation and optimisation technologies. However, researchers often fail to fully consider the security implications: it is not yet widely possible to assimilate BIM data and cybersecurity concepts, and security has thus far been overlooked. This paper reviews the empirical literature concerning IoT applications in the built environment and discusses real-world applications of the IoT intended to enhance construction practices, improve people’s lives and bolster cybersecurity. Specifically, this research addresses two research questions: (a) how suitable are the current IoT and CPS security stacks to address the cybersecurity threats facing digital twins in the context of smart buildings and districts? and (b) what are the current obstacles to tackling cybersecurity threats to built environment CPS? To answer these questions, this paper reviews the current state-of-the-art research concerning digital twins in the built environment, the IoT, BIM, urban cities and cybersecurity. The findings confirm the importance of using digital twins in both IoT and BIM, and note that eight reference zones across Europe have gained special recognition for their contributions to the advancement of IoT science. The paper therefore evaluates the use of digital twins in CPS to arrive at recommendations for expanding BIM specifications to facilitate IoT compliance, bolster cybersecurity and integrate digital twin and city standards in the smart cities of the future.

Battery Grading Algorithm in 2nd-Life Repurposing Li-ion Battery System

This article presents a methodology that improves the reliability and cyclability of 2nd-life Li-ion battery systems repurposed as energy storage systems (ESS). Most 2nd-life retired battery systems on the market have only a module/pack-level state of health (SOH) indicator, which is used to guide the appropriate depth of discharge (DOD) in ESS applications. Because cell-level SOH indication is lacking, the different degradation behaviors of individual cells cannot be identified once the pack reaches retired status; as a result, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application requirements for reliability and cyclability. The proposed battery grading algorithm uses a non-invasive methodology to detect outlier cells from historical voltage data and to calculate cell-level historical maximum temperatures using a semi-analytic method. In this way, each individual cell in the 2nd-life battery system can be graded in terms of SOH on the basis of its historical voltage fluctuation and estimated historical maximum temperature variation. These grades map to corresponding DOD grades in the repurposed ESS application, enhancing system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
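A minimal sketch of the outlier-detection step, using a robust z-score across cells on historical voltage data and mapping cells to coarse grades, is given below; the thresholds and grade boundaries are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch: flag outlier cells from historical voltage data with a robust z-score
# across cells at each time step, then map cells to coarse SOH-like grades.
# Thresholds and grade boundaries are illustrative assumptions.
import numpy as np

def grade_cells(voltages, z_thresh=3.5):
    """voltages: array (n_timesteps, n_cells) of historical cell voltages."""
    med = np.median(voltages, axis=1, keepdims=True)
    mad = np.median(np.abs(voltages - med), axis=1, keepdims=True) + 1e-9
    robust_z = 0.6745 * (voltages - med) / mad                   # modified z-score per time step
    outlier_rate = (np.abs(robust_z) > z_thresh).mean(axis=0)    # flagged fraction per cell

    grades = np.where(outlier_rate < 0.01, "A",                  # healthy -> deeper allowed DOD
             np.where(outlier_rate < 0.05, "B", "C"))            # weak cells -> shallower DOD
    return outlier_rate, grades

rng = np.random.default_rng(7)
v = 3.6 + rng.normal(0, 0.01, size=(5000, 12))
v[:, 3] += rng.normal(0, 0.05, 5000)                             # one degraded, noisier cell
rates, grades = grade_cells(v)
print(grades)                                                    # cell 3 receives a lower grade
```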

Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Given concerns about the depletion of fossil fuel sources and the deteriorating environment, research into renewable energy production will play a crucial role in alleviating dependence on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS is a potential candidate compared with other types of biomass because of its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route for upgrading bio-oil quality. Among the catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS include a comparatively large surface area, porosity, enriched surface functional groups, and a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and compared with an HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and the design of the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600 °C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for reaction orders n = 1, 2 and 3 were in the range of 6.67-20.37 kJ/mol for SS, 1.51-6.87 kJ/mol for HZSM5, and 2.29-9.17 kJ/mol for AC. According to these results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by lowering the Em value. Moreover, to examine the effect of the catalysts on bio-oil quality, a fixed-bed pyrolysis system was designed and implemented, and the composition of the produced bio-oil was analysed by gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1 and 4:1; the optimum ratio for cracking long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The bio-oils upgraded with HZSM5 and AC were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from non-catalytic pyrolysis of SS contained 49.27% oxygenated compounds, which dropped to 7.3% with HZSM5 and 13.02% with AC, while the generation of value-added chemicals such as light aromatic compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR and TGA techniques. Overall, this research demonstrates that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
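For reference, the Coats-Redfern linearization used to extract E from TGA mass-loss data can be sketched as below for the first-order case, ln[-ln(1-α)/T²] = ln(AR/(βE)) - E/(RT); the conversion curve in the example is a synthetic placeholder, not the paper's data.

```python
# Hedged sketch: Coats-Redfern activation energy for a first-order model,
# ln[-ln(1-alpha)/T^2] = ln(A*R/(beta*E)) - E/(R*T), fitted as a line against 1/T.
# The conversion data below are synthetic placeholders for TGA mass-loss curves.
import numpy as np

R = 8.314  # J/(mol K)

def coats_redfern_E(T, alpha):
    """T in kelvin, alpha = converted fraction (0 < alpha < 1) from TGA mass loss."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)   # slope = -E/R
    return -slope * R                              # activation energy in J/mol

# Synthetic placeholder for one devolatilization stage (roughly 200-600 °C)
T = np.linspace(473, 873, 80)                      # kelvin
alpha = np.linspace(0.05, 0.95, 80)                # converted fraction
print(f"estimated E = {coats_redfern_E(T, alpha) / 1000:.2f} kJ/mol")
```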

Predicting the Lack of GDP Growth: A Logit Model for 40 Advanced and Developing Countries

This paper identifies leading triggers of deficient episodes in GDP growth based on a sample of countries at different stages of development over 1994-2017. Using logit models, we build early warning systems (EWS), and our results show important differences between developing countries (DCs) and advanced economies (AEs). For AEs, the main predictors of the probability of entering a GDP-growth-deficient episode are the deterioration of external imbalances and the vulnerability of the fiscal position, while DCs face different challenges: the key indicators for them are, first, a low ability to repay their debts and, second, whether or not they belong to a common currency area. We also build homogeneous pools of countries within the AEs and DCs. For AEs, the evolution of the proportion of countries in the riskiest pool is marked, first, by three distinct peaks just after the high-tech bubble burst, the global financial crisis and the European sovereign debt crisis, and second, by a very low minimum level in 2006 and 2007. In contrast, the situation of DCs is characterized by a relative stability of this proportion followed by an upward trend from 2006, which can be explained by a more unfavorable socio-political environment leading to shortcomings in fiscal consolidation.
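A minimal sketch of a pooled logit early-warning model of this kind, estimated with statsmodels on synthetic data, is shown below; the predictor names and coefficients are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch: pooled logit early-warning model on synthetic data.
# Predictor names, coefficients and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 960                                  # e.g. 40 countries x 24 years
X = pd.DataFrame({
    "current_account_gdp": rng.normal(-1, 3, n),
    "fiscal_balance_gdp": rng.normal(-2, 2, n),
    "debt_service_ratio": rng.normal(15, 5, n),
})
# Latent propensity of a GDP-growth-deficient episode (signs chosen for illustration)
logit_p = (-2.0 - 0.3 * X["current_account_gdp"]
           - 0.4 * X["fiscal_balance_gdp"] + 0.05 * X["debt_service_ratio"])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params.round(3))
print("predicted episode probabilities:", model.predict(sm.add_constant(X))[:5].round(2))
```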

Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

High ground-level ozone (GLO) concentrations adversely affect human health, vegetation and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make predictions or estimations; analyses focusing on the higher or extreme values of GLO concentration are rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate return levels using the corresponding GEV family (Gumbel, Weibull, and Fréchet). The results show that the Weibull distribution, known as a short-tailed distribution and considered to exhibit less extreme behaviour, is the best-fitting distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tailed distribution, is the best-fitting distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conducted this study using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Validation of the block maxima series fitted to the GEV distribution was performed using probability plots, quantile plots and the likelihood ratio test, and profile likelihood confidence intervals were used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
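A minimal sketch of the GEV workflow (MLE fit of block maxima and return-level computation with scipy) is given below; note that scipy's shape parameter c has the opposite sign of the usual ξ, so c > 0 corresponds to the Weibull type and c < 0 to the Fréchet type. The ozone maxima are synthetic placeholders.

```python
# Hedged sketch: fit a GEV distribution to block maxima by MLE and compute return levels.
# In scipy's parameterization, c > 0 is Weibull-type, c = 0 Gumbel, c < 0 Fréchet.
# The ozone data are synthetic placeholders, not station measurements.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Placeholder monthly maxima of ground-level ozone (ppb) at one station
block_maxima = genextreme.rvs(c=0.2, loc=80, scale=10, size=120, random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(block_maxima)   # MLE estimates
print(f"shape={c_hat:.3f}, location={loc_hat:.2f}, scale={scale_hat:.2f}")

# Return level z_T: the level exceeded on average once every T blocks,
# i.e. the (1 - 1/T) quantile of the fitted GEV.
for T in (10, 50, 100):
    z = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"{T}-block return level: {z:.1f} ppb")
```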

Scientific Methods in Educational Management: The Metasystems Perspective

Although scientific methods have been the subject of a large number of papers, the term ‘scientific methods in educational management’ is still not well defined. In this paper, we adopt the metasystems perspective to define this term and to distinguish such methods from those used in the era of the scientific management and knowledge management paradigms. In our opinion, scientific methods in educational management rely on global phenomena, events, and processes and their influence on the educational organization. Currently, scientific methods in educational management are integrated with the phenomena of globalization, cognitivisation, and openness of educational systems, and with global events such as the COVID-19 pandemic. Concrete scientific methods are nested in a hierarchy of increasingly abstract models of educational management, which form the context of the global impact on education in general and on learning outcomes in particular. However, scientific methods can be assigned to a specific mission, strategy, or tactic of educational management in a concrete organization, whether through global management, the local development of the school organization, and/or the development of the lifelong successful learner. By accepting this assignment, the scientific method becomes a personal goal of each individual within the educational organization, or an option for developing the educational organization to global standards. In our opinion, scientific methods in educational management need to confine their scope to the deep analysis of concrete tasks of the educational system (i.e., teaching, learning, assessment, development), which results in concrete strategies of organizational development. More important is seeking ways to achieve a dynamic equilibrium between the strategy and tactics of planetary tasks in the field of global education, which results in a need for ecological methods of learning and communication. In sum, the distinction between local and global scientific methods depends on the subjective conception of task assignment, measurement, and appraisal. Finally, we conclude that scientific methods are not holistic in themselves, but rather strategy and tactics implemented in the global context by an effective educational/academic manager.