Applying the Integrative Design Process in Architectural Firms: An Analytical Study on Egyptian Firms

When an architect carries out the design process alone, the quality of the architectural product deteriorates, because the complexity of today’s projects makes design a multi-disciplinary effort. The Integrative Design Process (IDP) must therefore be applied in architectural firms, especially from the early design phases, to improve product quality and to prevent the neglect of design principles that leads to low-grade buildings. The research explores the Integrative Design (ID) principles that fit architectural practice. Constraints facing this application are presented, together with strategies and solutions to overcome them. A survey questionnaire was conducted to collect data from a number of recognized Egyptian Architecture, Engineering and Construction (AEC) firms, exploring their opinions on using the IDP. The survey emphasizes the importance of the IDP in firms and presents the reasons preventing firms from applying it. The aim here is to investigate the potential of integrating this approach into architectural firms, emphasizing that such an application helps ensure the realization of the project’s goals and prevents a reduction in the project’s quality.

Utilization of Process Mapping Tool to Enhance Production Drilling in Underground Metal Mining Operations

Underground mining is at the core of the rapidly evolving metals and minerals sector owing to growing global mineral consumption. Even though surface mines are still more abundant, the scales of the industry are slowly tipping towards underground mining as orebodies become deeper and more complex. The efficient and productive functioning of underground operations therefore depends significantly on the synchronized performance of key elements such as the operating site, mining equipment, manpower and mine services. Production drilling is the process of drilling long holes that are subsequently charged and blasted to produce ore in underground metal mines, making it a crucial segment of the underground metal mining value chain. This paper presents a process mapping tool to evaluate the production drilling process in underground metal mining operations by dividing the process into three segments, namely Input, Process and Output, which are further broken down into factors and sub-factors. According to the study, the major input factors crucial for the efficient functioning of the production drilling process are power; drilling water; geotechnical support of the drilling site; skilled drilling operators; a services installation crew; oils and drill accessories for the drilling machine; survey markings at the drill site; proper housekeeping; regular maintenance of the drill machine; suitable transportation for reaching the drilling site; and proper ventilation. The major outputs of the production drilling process are ore; waste resulting from dilution; timely reporting and investigation of unsafe practices; optimized process time; and well-fragmented blasted material within the specifications set by the mining company. The paper also presents a drilling loss matrix, used to appraise the loss in planned production meters per day on account of availability loss due to machine breakdowns, underutilization of the machine, and productivity loss, measured in drilling meters per percussion hour relative to the planned productivity for the day. These three losses are essential for detecting bottlenecks in the process map of the production drilling operation, so that action plans can be instigated to suppress or prevent the causes of operational performance deficiencies. The tool helps mine management focus on the critical factors negatively impacting the production drilling operation and design the operational and maintenance strategies needed to mitigate them.
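
To make the drilling loss matrix concrete, the Python sketch below decomposes a day’s planned production meters into the three losses the abstract names; the function and the example figures are hypothetical, not values from the study.

```python
# Minimal sketch of the drilling loss matrix idea; all figures are
# hypothetical placeholders, not data from the study.

def drilling_loss_matrix(planned_hours, downtime_hours, idle_hours,
                         planned_rate, actual_rate):
    """Split the shortfall in planned drilling meters per day into
    availability, utilization and productivity losses.

    planned_rate / actual_rate are drilling meters per percussion hour.
    """
    planned_meters = planned_hours * planned_rate
    availability_loss = downtime_hours * planned_rate    # breakdowns
    utilization_loss = idle_hours * planned_rate         # machine not used
    operating_hours = planned_hours - downtime_hours - idle_hours
    productivity_loss = operating_hours * (planned_rate - actual_rate)
    return {
        "planned_meters": planned_meters,
        "availability_loss": availability_loss,
        "utilization_loss": utilization_loss,
        "productivity_loss": productivity_loss,
        "actual_meters": planned_meters - availability_loss
                         - utilization_loss - productivity_loss,
    }

# Example: 10 planned percussion hours, 2 h of breakdowns, 1 h idle,
# planned 30 m/h versus 27 m/h achieved.
print(drilling_loss_matrix(10, 2, 1, 30, 27))
```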

High Performance Electrocardiogram Steganography Based on Fast Discrete Cosine Transform

Based on the fast discrete cosine transform (FDCT), the authors present a high-capacity, high-perceived-quality steganography method for electrocardiogram (ECG) signals. By applying a simple adjusting policy to the 1-dimensional (1-D) DCT coefficients, a large volume of secret message can be effectively embedded in an ECG host signal and successfully extracted at the intended receiver. Simulations confirmed that the resulting perceived quality is good, while the hiding capacity of the proposed method significantly outperforms that of existing techniques. In addition, the proposed method has a certain degree of robustness. Since its computational complexity is low, the method is feasible for real-time applications.
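
As an illustration of this kind of coefficient-adjustment embedding, the following Python sketch hides bits in the parities of quantized 1-D DCT coefficients; the step size, band position and QIM-style parity policy are assumptions for illustration, not the authors’ exact adjusting policy.

```python
# Minimal sketch of DCT-domain embedding in an ECG segment.
import numpy as np
from scipy.fft import dct, idct  # fast DCT-II and its inverse

DELTA = 0.02  # hypothetical quantization step: capacity vs. quality trade-off
START = 32    # hypothetical first mid-band coefficient used for hiding

def embed(ecg, bits):
    c = dct(ecg, norm="ortho")
    for k, b in enumerate(bits):
        q = int(np.round(c[START + k] / DELTA))
        if q % 2 != b:            # adjust coefficient parity to carry the bit
            q += 1
        c[START + k] = q * DELTA
    return idct(c, norm="ortho")

def extract(stego, n_bits):
    c = dct(stego, norm="ortho")
    return [int(np.round(c[START + k] / DELTA)) % 2 for k in range(n_bits)]

ecg = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in for an ECG frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(ecg, bits), len(bits)) == bits
```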

Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined in a weighted-sum objective function, and proposes two factors that describe congestion. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade adds a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very short calculation times.
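
A minimal sketch of such a weighted-sum objective is shown below; the weights, the congestion index and the simplified I²R loss term are illustrative assumptions, since the paper’s two congestion factors and its power-flow model are not reproduced here.

```python
import numpy as np

W_LOSS, W_CONG = 0.6, 0.4  # assumed weights for the weighted sum

def fitness(line_currents, line_ratings, line_resistances):
    """Objective evaluated for each candidate network configuration."""
    p_loss = np.sum(line_resistances * line_currents ** 2)  # I^2 R losses
    loading = line_currents / line_ratings                  # per-line loading
    congestion = np.mean(np.maximum(loading - 1.0, 0.0))    # mean overload
    # In practice both terms are normalized to comparable scales before
    # weighting; a particle swarm then minimizes this value over the set
    # of feasible switch configurations.
    return W_LOSS * p_loss + W_CONG * congestion

print(fitness(np.array([80.0, 120.0]), np.array([100.0, 100.0]),
              np.array([0.05, 0.08])))
```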

Sensitivity Analysis of External-Rotor Permanent Magnet Assisted Synchronous Reluctance Motor

In this paper, a systematic approach is taken to assess the set of most effective rotor design parameters for an external-rotor permanent magnet assisted synchronous reluctance motor (PMaSynRM) and thereby tackle the design complexity of the rotor structure. Introducing permanent magnets into the rotor flux barriers has several advantages, among them saturating the rotor iron ribs, increasing the motor torque density and improving the power factor. Moreover, the d-axis and q-axis inductances are of great importance for simultaneously achieving maximum developed torque and low torque ripple. Therefore, a sensitivity analysis of the rotor geometry of an 8-pole external-rotor permanent magnet assisted synchronous reluctance motor is performed. Several magnetically accurate finite element analyses (FEA) are conducted to characterize the electromagnetic performance of the motor, and they validate the torque and power factor equations for the proposed external-rotor motor. The results confirm that, owing to the permanent-magnet torque term added to the reluctance torque, the electromagnetic torque of the PMaSynRM increases.
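
For reference, a standard dq-frame torque expression of the kind such studies validate separates the two components; sign and axis conventions vary across the SynRM/PMSM literature, so this is a generic form rather than the paper’s exact equation:

```latex
T_e = \underbrace{\frac{3}{2}\,p\,(L_d - L_q)\,i_d\,i_q}_{\text{reluctance torque}}
    + \underbrace{\frac{3}{2}\,p\,\lambda_{pm}\,i_q}_{\text{permanent-magnet torque}}
```

where \(p\) is the number of pole pairs, \(L_d\) and \(L_q\) the d- and q-axis inductances, \(\lambda_{pm}\) the magnet flux linkage, and \(i_d\), \(i_q\) the stator current components.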

Tools for Analysis and Optimization of Standalone Green Microgrids

Green microgrids, which use mostly renewable energy (RE) for generation, are complex systems with inherently nonlinear dynamics. Among the variety of available optimization tools, only a few adequately consider this complexity. This paper evaluates the applicability of two somewhat similar optimization tools tailored for standalone RE microgrids and also assesses a machine learning tool for performance prediction that can enhance the reliability of any chosen optimization tool. It shows that one of these microgrid optimization tools has certain advantages over the other and presents a detailed routine for preparing input data to simulate RE microgrid behavior. The paper also shows how neural-network-based predictive modeling can be used to validate and forecast solar power generation from weather time series data, which improves the overall quality of standalone RE microgrid analysis.
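
The following Python sketch shows the flavor of such neural-network forecasting on synthetic data; the weather features, network size and scikit-learn regressor are stand-ins chosen for illustration, not the tool evaluated in the paper.

```python
# Minimal sketch: predict solar generation from weather time series.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical hourly features: irradiance, temperature, cloud cover (0-1).
X = rng.random((1000, 3))
# Synthetic generation in kW: driven by irradiance, damped by cloud cover.
y = 5.0 * X[:, 0] * (1 - 0.5 * X[:, 2]) + 0.1 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out hours:", round(model.score(X_te, y_te), 3))
```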

Web Proxy Detection via Bipartite Graphs and One-Mode Projections

With the Internet becoming the dominant channel for business and everyday life, many IPs are increasingly masked behind web proxies for illegal purposes such as propagating malware, hosting phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of these bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. A web proxy’s URL may vary from time to time, but the users’ underlying interests do not; based on this intuition, we use our private tools, implemented with WebDriver, to examine whether the top URLs visited by web proxy users are themselves web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially previously unknown ones.
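
A minimal Python sketch of this pipeline on toy flow records is given below; the host/URL names are invented, and networkx and scikit-learn stand in for whatever tooling the authors used.

```python
# Hosts and visited URLs form a bipartite graph; its one-mode projection
# links hosts that share URLs, and spectral clustering groups behaviorally
# similar hosts. Flow records below are hypothetical.
import networkx as nx
import numpy as np
from networkx.algorithms import bipartite
from sklearn.cluster import SpectralClustering

flows = [("h1", "u1"), ("h1", "u2"), ("h2", "u1"), ("h2", "u2"),
         ("h3", "u3"), ("h4", "u3"), ("h4", "u4")]
hosts = sorted({h for h, _ in flows})

B = nx.Graph(flows)                                   # bipartite host-URL graph
P = bipartite.weighted_projected_graph(B, hosts)      # one-mode projection

A = nx.to_numpy_array(P, nodelist=hosts)              # host-host similarity
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(dict(zip(hosts, labels)))                       # behavior clusters
```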

From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

The agri-food value chain involves various stakeholders with different roles, all of whom abide by national and international rules and leverage marketing strategies to promote their products. Food products and their processing phases carry a large mass of data that is often not used to inform the final customer. Some of these data, if properly identified and used, can benefit a single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying patterns have recently changed: customers pay attention to wellbeing and food quality. Food citizenship and food democracy have emerged, built on transparency, sustainability and the need for food information. The Internet of Things (IoT) and Analytics, two of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main driver of a genuine ‘4.0 transformation’ of agriculture. However, realizing a traceability system is not simple, owing to the complexity of the agri-food supply chain, the many actors involved, differing business models, environmental variations impacting products and/or processes, and extraordinary climate change. To support companies on a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques makes it possible to identify the information held by the different actors along the agri-food supply chain. IoT technologies for data collection and Analytics techniques for data processing supply information that increases intra-company efficiency and competitiveness in the market. All the recovered information can be exposed through IT solutions and mobile applications, making it accessible to the company, the entire supply chain and the consumer, with a view to guaranteeing transparency and quality.
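
To make “product data” concrete, here is a minimal, hypothetical traceability-event record of the kind IoT devices could emit at each supply-chain step; the field names and values are illustrative, not the framework’s actual schema.

```python
# Hypothetical traceability event linking a product lot to a process step.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    lot_id: str    # product lot being tracked
    actor: str     # supply-chain actor performing the step
    process: str   # business-process task (e.g., harvest, pressing)
    inputs: list   # upstream lot ids consumed by this step
    sensors: dict  # IoT readings attached to the step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

event = TraceEvent("LOT-2019-031", "Mill-A", "pressing",
                   inputs=["OLIVES-2019-114"],
                   sensors={"temp_C": 24.5, "humidity_pct": 41})
print(json.dumps(asdict(event), indent=2))  # shareable along the chain
```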

Statistical Analysis of Failure Cases in Aerospace

The major concern in the aviation industry is flight safety. Although great effort has been put into the development of material and system reliability, fatal accidents caused by failures still occur. Due to the complexity of the aviation system and the interaction among failing components, failure analysis of the related equipment is difficult. This study surveys failure cases in aviation extracted from failure analysis journals, including Engineering Failure Analysis and Case Studies in Engineering Failure Analysis, in order to identify failure-sensitive factors and failure-prone parts. The analytical results show that, among the surveyed cases, fatigue failure is the most frequent failure mode. The most frequently failed components are disks, blades, landing gears, bearings and fasteners, and the most frequently involved materials are steel, aluminum alloys, superalloys and titanium alloys. Therefore, to assure safety in aviation, more attention should be paid to fatigue failures.
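
The survey’s tallying step can be pictured with a few lines of Python; the cases below are invented placeholders, not the paper’s data.

```python
# Count failure modes, components and materials across surveyed cases.
from collections import Counter

cases = [  # (failure mode, component, material) per surveyed case
    ("fatigue", "blade", "superalloy"),
    ("fatigue", "disk", "titanium alloy"),
    ("corrosion", "landing gear", "steel"),
    ("fatigue", "bearing", "steel"),
    ("overload", "fastener", "aluminum alloy"),
]
for i, name in enumerate(["failure mode", "component", "material"]):
    print(name, Counter(c[i] for c in cases).most_common(3))
```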

Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Over the last decades there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration, the “bucket of stuff” vision, and search-and-rescue operations during earthquakes. With so many self-reconfigurable robots available, choosing the optimal one is always a concern for robot users, since available features, facilities and complexity keep increasing. The objective of this research work is to present a multiple attribute decision making based methodology for the coding, evaluation, comparison, ranking and selection of modular self-reconfigurable robots using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). In total, 86 attributes that affect structure and performance are identified, and a database of modular self-reconfigurable robots based on these pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. An example with five robots (Atron, Smores, Polybot, M-Tran 3, Superbot) is illustrated, and the ranking of the robots is successfully performed, showing that Smores is the best robot for the illustrated operational need; the methodology is found to be effective and simple to use.
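
The following Python sketch illustrates the TOPSIS ranking step on the five robots; the decision matrix, criteria and weights are invented for illustration, a tiny stand-in for the paper’s 86-attribute database.

```python
import numpy as np

names = ["Atron", "Smores", "Polybot", "M-Tran 3", "Superbot"]
# Hypothetical criteria: DOF per module, docking speed, payload, cost;
# the first three are benefit criteria, cost is a cost criterion.
X = np.array([[3.0, 2.0, 1.0, 5.0],
              [4.0, 3.0, 1.2, 4.0],
              [2.0, 2.0, 0.8, 3.0],
              [3.0, 4.0, 1.1, 6.0],
              [4.0, 3.0, 1.5, 7.0]])
w = np.array([0.3, 0.2, 0.3, 0.2])          # assumed criterion weights
benefit = np.array([True, True, True, False])

V = w * X / np.linalg.norm(X, axis=0)        # weighted normalized matrix
ideal = np.where(benefit, V.max(0), V.min(0))   # positive-ideal solution
anti = np.where(benefit, V.min(0), V.max(0))    # negative-ideal solution
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)          # higher means closer to ideal
for name, c in sorted(zip(names, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```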

Sustainable Intensification of Agriculture in Victoria’s Food Bowl: Optimizing Productivity with the use of Decision-Support Tools

A participatory and engaged approach is key to connecting agricultural managers with sustainable agricultural systems so as to support and optimize production in Victoria’s food bowl. The sustainable intensification (SI) approach is well documented globally, but participation rates amongst Victorian farmers are fragmentary, and its key outcomes and implementation strategies are poorly understood. Improved decision-support management tools and a greater understanding of the productivity gains available from implementing SI are necessary. This paper reviews the current understanding and uptake of SI practices amongst farmers in one of Victoria’s premier food producing regions, the Goulburn Broken, and spatially analyses the potential for this region to adapt to climate change and optimize food production. A Geographical Information Systems (GIS) approach is taken to develop an interactive decision-support tool accessible to on-ground agricultural managers. The tool encompasses a multiple criteria analysis (MCA) whose factors were identified during the construction phase of the tool, using expert witnesses and regional knowledge, framed within an Analytical Hierarchy Process (AHP). Given the complexity of the interrelations between the key outcomes, this participatory approach, in which local realities and factors inform the key outcomes and help to shape strategies for a particular region, results in a robust strategy for sustainably intensifying production in key food producing regions. The creation of an interactive, locally embedded, decision-support management and education tool can help to close the gap between farmer knowledge and production, increase on-farm adoption of sustainable farming strategies and techniques, and optimize farm productivity.
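
As an illustration of the AHP step that weights the MCA factors, the Python sketch below derives criterion weights from a pairwise comparison matrix and checks its consistency; the three criteria and the judgments are hypothetical, not the study’s expert inputs.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale for three hypothetical
# criteria, e.g., water availability, soil suitability, climate resilience.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is ok
```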

Influence of Shading on a BIPV System’s Performance in an Urban Context: Case Study of BIPV Systems of the Science Center of Complexity Building of the National and Autonomous University of Mexico in Mexico City

The purpose of this paper is to establish the influence of shading on a Building Integrated Photovoltaic (BIPV) system’s performance in an urban context. The PV systems of the Science Center of Complexity (Centro de Ciencias de la Complejidad) Building, based on the Main Campus of the National and Autonomous University of Mexico (UNAM) in Mexico City, were taken as the case study. The PV systems are placed on the rooftop and on the south façade of the building. The south-façade PV system, operating as sunshades, consists of two strings: one at the ground floor and the other at the first floor. According to the building’s facility manager, the south-façade PV system generates 42% less electricity per kilowatt peak (kWp) installed than the one on the roof. The methods applied in this study were Solar Radiation Analysis (SRA) simulations, performed with the Insight 360 plug-in for Revit 2018®, and on-site measurements using specialized tools. The SRA simulations showed that the shading cast by the first-floor PV system on the ground-floor PV system decreases the latter’s incident solar radiation by over 50%. The simulation outcome was compared with and validated against the measured on-site data. In conclusion, the loss factor arising from the shading of the PVs is due to both the surroundings and the PV system’s own design. The south-façade BIPV system’s deficient design generates critical performance losses and decreases its profitability.

4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

This paper introduces the applicability of underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavations through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites since at least the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Consequently, it tends to be applied in bright environments where underwater visibility exceeds 1 m, which limits its use on the many submerged archaeological sites found in more turbid conditions. Recent years have seen the significant development of better digital photographic sensors and improved optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out under challenging conditions (< 0.5 m visibility) at the archaeological site of Ropotamo in the Bulgarian Black Sea, occupied from the Early Bronze Age (3,300 BC) to the Late Ottoman period (17th century AD). It demonstrates that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Commercialization of Technologies, Productivity and Problems of Technological Audit in the Russian Economy

The problems of technological development take on special significance for the Russian Federation in the context of modernizing its production base. The complexity of the Russian economy’s position is that it cannot be fully classified as a developing one: Russia is a strong industrial power that has gone through processes of destructive de-industrialization amid changes to its economic and political structure. The need to find ways toward re-industrialization is not unique among industrially developed economies. Under the influence of two decades of production outsourcing, the industrial potential of the world’s leading economies regressed against the backdrop of the ascent of China, a new industrial giant. Therefore, the methods, tools and techniques utilized for the industrial renaissance in the EU may be used to achieve a technological leap in the Russian Federation, especially since the time lag of 5-7 years makes it possible to analyze best practices and adopt the technology transfer tools that have shown the greatest efficiency. In this article, methods of technology transfer are analyzed, the role of technological audit is justified, and the factors that influence the successful commercialization of technologies are examined.

Communication Design in Newspapers: A Comparative Study of Graphic Resources in Portuguese and Spanish Publications

As a way of managing the increasing volume and complexity of information circulating today, graphical representations are increasingly used to add meaning to the information presented in communication media through efficient communication design. Visual culture itself, driven by technological evolution, has been redefining the forms of communication, so that contemporary visual communication has a major impact on society. This article presents the results, and the respective comparative analysis, of four publications in the Iberian press, focusing on the formal aspects of newspapers and the space they dedicate to the various communication elements. Two Portuguese newspapers and two Spanish newspapers were selected for this purpose. The findings indicate that the newspapers show similarity in their use of graphic solutions, corroborating a visual trend in communication design. The results also reveal that the Spanish newspapers are more meticulous about graphic consistency. This study is intended to contribute to improving knowledge of the Iberian generalist press.

Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

In today’s scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has increased to such an extent that traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms and architectures such as the Wallace tree, Radix-4 Booth, Radix-8 Booth and Dadda multipliers have proven comparatively slow for pipelined architectures; they therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware. Recently, the Vedic algorithm has proven mathematically efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm is better suited to pipelined architectures and how it can be combined with traditional architectures and algorithms to enhance their ability even further. We also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the most efficient option available.
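
Vedic multiplier designs of this kind are generally built on the Urdhva Tiryagbhyam (“vertically and crosswise”) sutra. Assuming that is the algorithm meant here, the Python sketch below shows why it suits pipelining: every output column is an independent sum of cross-products, so all columns can be formed in parallel before a single carry-propagation pass. The digit-list representation is an illustrative choice.

```python
def vedic_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryagbhyam multiplication of equal-length digit lists
    (most-significant digit first; pad the shorter operand if needed)."""
    n = len(a_digits)
    # Column sums: each column is independent, so hardware can compute
    # all of them in parallel.
    cols = [sum(a_digits[i] * b_digits[c - i]
                for i in range(n) if 0 <= c - i < n)
            for c in range(2 * n - 1)]
    # Single carry-propagation pass from the least-significant column.
    result, carry = [], 0
    for s in reversed(cols):
        carry, digit = divmod(s + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result[::-1]

assert vedic_multiply([4, 6], [3, 2]) == [1, 4, 7, 2]  # 46 x 32 = 1472
```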

Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

The aim of this paper is the comparison of three different methods of producing fuzzy tolerance relations for rainfall data classification; specifically, the correlation coefficient, cosine amplitude and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and consist of 20-year time series of average monthly rainfall depth. The three methods were used to express these data as fuzzy relations. For each method, the resulting fuzzy tolerance relation is then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one another. Given the complexity of the calculations, it is important to determine which of the methods is computationally simpler and needs fewer compositions to give reliable results.
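
The Python sketch below illustrates one of the three methods (cosine amplitude) followed by repeated max-min composition until the tolerance relation becomes transitive, i.e., a fuzzy equivalence relation, and then a lambda-cut to group stations; the random series are stand-ins for the actual rainfall records.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((7, 240))  # 7 stations, 20 years of monthly rainfall

# Cosine amplitude: r_ij = |x_i . x_j| / (|x_i| |x_j|)
norms = np.linalg.norm(X, axis=1)
R = np.abs(X @ X.T) / np.outer(norms, norms)  # fuzzy tolerance relation

def max_min(P, Q):
    """(P o Q)_ij = max_k min(P_ik, Q_kj)."""
    return np.max(np.minimum(P[:, :, None], Q[None, :, :]), axis=1)

# Compose until R o R == R: the relation is then a fuzzy equivalence.
compositions = 0
while True:
    R2 = max_min(R, R)
    compositions += 1
    if np.allclose(R2, R):
        break
    R = R2

# Lambda-cut at a chosen degree of confidence groups similar stations.
lam = 0.95
print("compositions needed:", compositions)
print((R >= lam).astype(int))
```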

A Query Optimization Strategy for Autonomous Distributed Database Systems

A distributed database is a collection of logically related databases that cooperate in a transparent manner, with query processing using a communication network to transmit data between sites; this remains one of the challenges in the database world. The development of sophisticated query optimization technology is a key reason for the commercial success of database systems, but its complexity and cost grow with the number of relations in the query. Mariposa, query trading, and query trading with processing task trading are strategies developed for autonomous distributed database systems, but they incur high optimization cost because all nodes are involved in generating the optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT in which the seller nodes with the lowest costs gradually receive higher priorities, reducing the optimization time. We implement the proposed strategy and present the results and the analysis based on them.

CompleX-Machine: An Automated Testing Tool Using X-Machine Theory

This paper is aimed at creating an automatic Java X-Machine testing tool for software development. The nature of software development is changing; thus, the type of software testing tools required is also changing. Software is growing increasingly complex and, partly due to the commercial impetus for faster software releases with new features and value, is increasingly in danger of containing faults. These faults can incur huge costs for software development organisations and users; research from Cambridge Judge Business School estimated the cost of software bugs to the global economy at $312 billion. Beyond the cost, faster software development methodologies and the increasing expectation that developers also act as testers are driving demand for faster, automated and effective tools to prevent potential faults as early as possible in the software development lifecycle. Using X-Machine theory, this paper explores a new tool to address software complexity, changing expectations on developers, and faster development pressures and methodologies, with a view to reducing the huge cost of fixing software bugs.
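
For readers unfamiliar with the formalism, a stream X-machine is a finite-state machine whose transitions are processing functions acting on a memory; the minimal Python sketch below (the paper’s tool targets Java, and this toy counter machine is purely illustrative) shows the idea.

```python
class XMachine:
    """Minimal stream X-machine: states + memory + processing functions."""

    def __init__(self, states, initial, memory, transitions):
        self.states, self.state = states, initial
        self.memory = memory
        # transitions: {(state, function_name): (function, next_state)}
        self.transitions = transitions

    def step(self, fn_name, inp):
        fn, nxt = self.transitions[(self.state, fn_name)]
        out, self.memory = fn(self.memory, inp)  # output and memory update
        self.state = nxt
        return out

# Hypothetical example: a machine that accumulates a running total.
def add(mem, x):
    return f"total={mem + x}", mem + x

m = XMachine(states={"idle", "counting"}, initial="idle", memory=0,
             transitions={("idle", "add"): (add, "counting"),
                          ("counting", "add"): (add, "counting")})
print(m.step("add", 3), m.step("add", 4))  # total=3 total=7
```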

Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Software technology is developing rapidly, which drives the growth of various industries. Nowadays, software-based applications are widely adopted for business purposes. For any software company, developing reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and the business. Hence there is a need for techniques that can predict software defects early. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced; they learn patterns from previous software versions in order to find the defects in the current version. These techniques have attracted researchers because of their significant impact on industrial growth through bug identification. Several studies have been carried out on this basis, but achieving the desired defect prediction performance remains challenging. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used to better optimize the features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT hybrid approach are compared with those from the plain DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
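
A minimal sketch of such a GA-DT hybrid is given below, using scikit-learn on synthetic data; the GA here uses plain cross-validated accuracy as its fitness, a simplification of the paper’s improved fitness function, and all hyperparameters are illustrative.

```python
# GA-based feature selection feeding a decision tree classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():                       # empty feature set is invalid
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5     # random feature-subset masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection of parents.
    parents = pop[[max(rng.choice(len(pop), 2), key=lambda i: scores[i])
                   for _ in range(len(pop))]]
    # Uniform crossover between consecutive parents, then bit-flip mutation.
    cross = rng.random(parents.shape) < 0.5
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))
    pop = children ^ (rng.random(children.shape) < 0.02)
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```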