A Multiple Beam LTE Base Station Antenna with Simultaneous Vertical and Horizontal Sectorization

A low-wind-load, lightweight, broadband multi-beam base station antenna has been developed. It can generate any required number of beams with the required beamwidths, and it can provide horizontal and vertical sectorization at the same time. Vertical sectorization doubles the overall number of beams and will be very valuable in LTE-A and 5G; it can be used to serve vertically split inner and outer cells, which improves system performance. The intersection between the beams of the proposed multi-beam antenna can be controlled by optimizing the design parameters of the antenna: the gain at the points of intersection between the beams, the null filling, and the overlap between the beams can all be adjusted. The proposed multi-beam base station antenna can cover an essentially unlimited number of wireless applications, regardless of their frequency bands, and can simultaneously cover current and future wireless technology generations such as 2G, 3G, and 4G (LTE). For example, in LTE it covers the bands 450-470 MHz, 690-960 MHz, 1.4-2.7 GHz, and 3.3-3.8 GHz. It has at least two ports for each band in each beam for ±45° polarizations, and it can include 72 ports or even more, which facilitates any further needed capacity expansion.

Optimizing and Evaluating Quality Control Performance of the Disposable Essentials Production Process Using a Vague Goal Programming Approach

To achieve effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment. Uncertainty is expressed because there is always measurement error in the real world; therefore, in this study the conditions are examined in a vague, distance-based environment. The disposable essentials process in Kach Company was studied. Statistical control tools were used to characterize the existing process for four responses: the average weight, height, crater diameter, and volume of the disposable glasses. Goal programming was then used to find the optimal combination of factor settings in a vague environment, which accounts for the uncertainty of the initial information when some of the model parameters are vague; in addition, a fuzzy regression model is used to predict the four responses described above. The optimization results show that the process capability index values for the average weight, height, crater diameter, and volume of the disposable glasses were improved. This increases product quality and reduces waste, which lowers the cost of the finished product and ultimately improves customer satisfaction, which in turn means increased sales.
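
Since the improvement is reported through process capability indices, a minimal sketch of how such indices are typically computed for one of the four responses may help; the specification limits and sample weights below are hypothetical, not taken from the study.

```python
import statistics

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk): Cp = (USL - LSL) / (6*sigma),
    Cpk = min(USL - mean, mean - LSL) / (3*sigma)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical weight measurements (g) and spec limits for a disposable glass
weights = [3.02, 2.98, 3.05, 2.97, 3.01, 3.00, 2.99, 3.04]
print(process_capability(weights, lsl=2.90, usl=3.10))
```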

Integration of Big Data to Predict Transportation for Smart Cities

Intelligent transportation systems are essential for building smarter cities. Machine-learning-based transportation prediction is a highly promising approach because it makes otherwise invisible aspects visible. In this context, this research aims to build a prototype model that predicts the behavior of a transportation network using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays occur frequently. To overcome this problem, a prediction model is presented that finds patterns of bus delay using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data gathered through public data portals and real-time Application Programming Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval-pattern models of bus operations based on traffic environment factors (road speed, station conditions, weather, and real-time bus operating information). The prototype model is designed with the machine learning tool RapidMiner Studio, and tests for bus delay prediction were conducted. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and for finding correlations by processing huge amounts of data. Based on this analysis method, this research therefore demonstrates an effective use of machine learning and urban big data to understand urban dynamics.
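
The paper builds its model in RapidMiner Studio; as a rough scikit-learn analogue, the sketch below trains a regressor on the kinds of features named above (road speed, weather, bus status, scheduled headway) to predict delay in minutes. The feature layout and the numbers are hypothetical and only meant to illustrate the workflow, not the authors' actual pipeline.

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical rows: [road_speed_kmh, rainfall_mm, passengers_waiting, scheduled_headway_min]
X = [[32, 0.0, 5, 10],
     [18, 4.2, 12, 10],
     [25, 1.0, 8, 12],
     [12, 6.5, 15, 8],
     [40, 0.0, 3, 15]]
y = [1.0, 6.5, 3.0, 9.0, 0.5]          # observed delay (minutes) for each row

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[20, 3.0, 10, 10]]))   # predicted delay for new conditions
```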

Search for Flavour Changing Neutral Current Couplings of Higgs-up Sector Quarks at Future Circular Collider (FCC-eh)

In the search for new physics beyond the Standard Model, Flavour Changing Neutral Currents (FCNC) are a promising research field in terms of observability at future colliders. Increased Higgs production with higher energy and luminosity in colliders is essential for the verification or falsification of our knowledge of physics and its predictions, and for the search for new physics. FCC-eh is the prospective electron-proton collider component of the Future Circular Collider project; it offers great sensitivity due to its high luminosity and low interference. In this work, the tqh FCNC interaction vertex with off-shell top quark decay at electron-proton colliders is studied. Using the MadGraph5_aMC@NLO multi-purpose event generator, the observability of the tuh and tch couplings is obtained under the equal-coupling scenario. The upper limit on the branching ratio of the tree-level top quark FCNC decay is determined to be 0.012% at the FCC-eh with 1 ab⁻¹ of luminosity.

Impact of Fluid Flow Patterns on Metastable Zone Width of Borax in Dual Radial Impeller Crystallizer at Different Impeller Spacings

Conducting crystallization in an agitated vessel requires a proper selection of mixing parameters that would result in the production of crystals with specific properties. In dual impeller systems, which are characterized by more complex hydrodynamics due to possible fluid flow interactions, revealing a clear link between mixing parameters and crystallization kinetics is still an open issue. The aim of this work is to establish this connection by investigating how fluid flow patterns, generated by two impellers mounted on the same shaft, affect the metastable zone width of borax decahydrate, one of the most important parameters of the crystallization process. The investigation was carried out in a 15-dm³ bench-scale batch cooling crystallizer with an aspect ratio (H/T) equal to 1.3. For this purpose, two radial straight-blade turbines (4-SBT) were used for agitation. Experiments were conducted at different impeller spacings at the state of complete suspension. During the unseeded batch cooling crystallization, the solution temperature and supersaturation were continuously monitored, which enabled determination of the metastable zone width. The hydrodynamic conditions achieved in the vessel at the different impeller spacings investigated were analyzed in detail. This was done, firstly, by measuring the mixing time required to attain the desired level of homogeneity. Secondly, the fluid flow patterns generated in the described dual impeller system were both photographed and simulated with the VisiMix Turbulent software, and a comparison of these two visualization methods was performed. The experimentally obtained results showed that the metastable zone width is clearly affected by the hydrodynamics in the crystallizer. This means that this crystallization parameter can be controlled not only by adjusting the saturation temperature or cooling rate, as is usually done, but also by choosing a suitable impeller spacing that results in the formation of crystals with the desired size distribution.

A Real-Time Simulation Environment for Avionics Software Development and Qualification

The development of guidance, navigation and control algorithms and avionic procedures requires the availability of suitable analysis and verification tools, such as simulation environments, which support the design process and allow potential problems to be detected prior to flight test, in order to make new technologies available at reduced cost, time and risk. This paper presents a simulation environment for avionic software development and qualification, especially aimed at equipment for general aviation aircraft and unmanned aerial systems. The simulation environment includes models for short- and medium-range radio-navigation aids, flight assistance systems, and ground control stations. All the software modules are able to simulate the modeled systems in both fast-time and real-time tests, and were implemented following component-oriented modeling techniques and a requirement-based approach. The paper describes the specific model features, the architecture of the implemented software systems, and their validation process. The validation tests performed highlighted the capability of the simulation environment to guarantee in real time the required functionality and performance of the simulated avionics systems, as well as to reproduce the interaction between these systems, thus permitting a realistic and reliable simulation of a complete mission scenario.
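
The distinction between fast-time and real-time execution mentioned above amounts to whether each simulation step is synchronized with the wall clock. The minimal fixed-step scheduler below is only an illustrative sketch of that idea, with an assumed interface; it is not the architecture of the environment described in the paper.

```python
import time

def run_simulation(step_fn, dt=0.01, duration=10.0, real_time=True):
    """Fixed-step loop: in real-time mode each step waits for the wall clock,
    in fast-time mode steps run back-to-back as fast as the host allows."""
    t = 0.0
    start = time.perf_counter()
    while t < duration:
        step_fn(t, dt)                 # advance the simulated system by one step
        t += dt
        if real_time:
            lag = start + t - time.perf_counter()
            if lag > 0:
                time.sleep(lag)        # keep simulated time locked to wall-clock time

# Placeholder step function standing in for a simulated avionics model
run_simulation(lambda t, dt: None, dt=0.02, duration=1.0, real_time=False)
```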

Tensile Properties of 3D Printed PLA under Unidirectional and Bidirectional Raster Angle: A Comparative Study

Fused deposition modeling (FDM) has gained popularity in recent times due to its capability to create prototypes as well as functional end-use products directly from a CAD file. Parts fabricated using the FDM process have mechanical properties comparable with those of injection-molded parts. However, the performance of an FDM part is severely affected by the poor mechanical properties resulting from the layered nature of the printed structure. The mechanical properties of the part can be improved by proper selection of process variables. In the present study, a comparative study between unidirectional and bidirectional raster angles has been carried out at combinations of different layer heights and raster widths. The unidirectional raster angle was varied at five levels, and the bidirectional raster angle at three levels. Fabrication of the tensile specimens and their tensile testing were conducted according to the ASTM D638 standard. From the results, it can be observed that the highest tensile strength was obtained at a 0° raster angle, followed by the 45°/45° raster angle, while the lowest tensile strength was obtained at a 90° raster angle. Analysis of the fractured surfaces revealed that failure takes place along the raster deposition direction for unidirectional raster angles, whereas zigzag failure is observed for bidirectional raster angles.

Performance Evaluation of Thermosiphon Based Solar Water Heater in India

This paper aims to study the performance of a thermosiphon solar water heating system with the help of a proposed analytical model. The proposed model predicts the temperature and mass flow rate in a thermosiphon solar water heating system depending on the radiation intensity and ambient temperature. The performance of the thermosiphon solar water heating system is evaluated in the Indian context. For this, eight cities in India are selected considering radiation intensity and geographical position. The predicted performance for these cities reveals the potential of thermosiphon solar water heating in India.
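
The abstract does not give the model's equations. As a purely illustrative sketch of the kind of relation such an analytical model usually rests on, the code below uses the standard Hottel-Whillier-Bliss collector equation together with a crude natural-circulation assumption; all parameter values and the fixed collector temperature rise are assumptions, not the paper's model.

```python
def collector_useful_gain(G_t, T_in, T_amb, A_c=2.0, F_R=0.8, tau_alpha=0.8, U_L=5.0):
    """Hottel-Whillier-Bliss useful gain of a flat-plate collector [W];
    A_c: area [m^2], F_R: heat removal factor, U_L: loss coefficient [W/m^2K]."""
    return max(0.0, A_c * F_R * (G_t * tau_alpha - U_L * (T_in - T_amb)))

def thermosiphon_flow(Q_u, dT_col=10.0, c_p=4186.0):
    """Rough mass flow rate [kg/s], assuming natural circulation settles at a
    collector temperature rise of about dT_col kelvin."""
    return Q_u / (c_p * dT_col) if Q_u > 0 else 0.0

Q = collector_useful_gain(G_t=800.0, T_in=35.0, T_amb=28.0)   # hypothetical inputs
print(Q, thermosiphon_flow(Q))
```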

Aerodynamic Coefficients Prediction from Minimum Computation Combinations Using OpenVSP Software

OpenVSP is an aerodynamic solver developed by the National Aeronautics and Space Administration (NASA) that allows a reliable model of an aircraft to be built. This software performs an aerodynamic simulation according to the angle of attack the aircraft makes with the incoming airstream and to its speed. A reliable aerodynamic model of the Cessna Citation X was designed, but it required a lot of computation time. As a consequence, a prediction method was established that allows the lift and drag coefficients to be predicted for all Mach numbers and for all angles of attack, excluding stall conditions, from the computation of only three angles of attack and a single Mach number. The aerodynamic coefficients given by the prediction method for the Cessna Citation X model were finally compared with the aerodynamic coefficients obtained from a complete OpenVSP study.
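
The abstract does not state how the extrapolation from three angles of attack and one Mach number is carried out. The sketch below is one plausible reconstruction, assuming a linear lift curve, a parabolic drag polar, and a Prandtl-Glauert compressibility correction; the reference values are invented and the method is not claimed to be the authors'.

```python
import numpy as np

def build_predictor(alphas_deg, cl_ref, cd_ref, mach_ref):
    """Fit CL = CL0 + CLa*alpha and CD = CD0 + k*CL^2 from three reference
    angles of attack computed at a single Mach number, then scale with a
    Prandtl-Glauert correction for other Mach numbers."""
    cla, cl0 = np.polyfit(alphas_deg, cl_ref, 1)        # lift-curve slope and intercept
    k, cd0 = np.polyfit(np.square(cl_ref), cd_ref, 1)   # drag-polar coefficients
    beta_ref = np.sqrt(1.0 - mach_ref**2)

    def predict(alpha_deg, mach):
        scale = beta_ref / np.sqrt(1.0 - mach**2)       # compressibility scaling
        cl = cl0 + cla * alpha_deg
        return cl * scale, (cd0 + k * cl**2) * scale
    return predict

# Hypothetical reference computations at Mach 0.3 for alpha = 0, 4 and 8 degrees
predict = build_predictor([0.0, 4.0, 8.0], [0.20, 0.55, 0.90], [0.025, 0.035, 0.060], 0.3)
print(predict(6.0, 0.6))   # predicted (CL, CD) at alpha = 6 deg, Mach 0.6
```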

The Excess Loop Delay Calibration in Bandpass Continuous-Time Delta-Sigma Modulators Based on a Q-Enhanced LC Filter

Q-enhanced LC filters are the most widely used architecture in Bandpass (BP) Continuous-Time (CT) Delta-Sigma (ΣΔ) modulators, due to their high-frequency operation, higher linearity than active filters, and the high quality factor obtained by the Q-enhancement technique. This technique consists of using a negative resistance that compensates for the ohmic losses in the on-chip inductor. However, it also introduces a zero in the filter transfer function, which affects the modulator performance in terms of Dynamic Range (DR), stability, and in-band noise (Signal-to-Noise Ratio (SNR)). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are performed for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. The simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.
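
For context on the Q-enhancement step, the standard relations below show how a parallel negative resistance raises the quality factor of an on-chip LC tank. The notation is assumed here, and the zero introduced into the transfer function depends on the specific transconductor implementation, which the abstract does not detail.

```latex
% Parallel RLC tank with loss resistance R_p and a negative resistance -R_n in parallel
Q_0 = \frac{R_p}{\omega_0 L}, \qquad
\frac{1}{R_{\mathrm{eff}}} = \frac{1}{R_p} - \frac{1}{R_n}
\;\Rightarrow\;
R_{\mathrm{eff}} = \frac{R_p R_n}{R_n - R_p}, \qquad
Q_{\mathrm{enh}} = \frac{R_{\mathrm{eff}}}{\omega_0 L}
\xrightarrow[\;R_n \to R_p\;]{} \infty .
```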

Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance, and these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using the backpropagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
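
For readers unfamiliar with the conditional-mean model, the k-factor GARMA process is usually written as below; the symbols are the standard ones (B is the backshift operator, the ν_i = cos λ_i are the Gegenbauer frequencies, and the d_i are the long-memory parameters) and are given here only for reference, since the abstract itself does not fix a notation.

```latex
\phi(B)\,\prod_{i=1}^{k}\bigl(1 - 2\nu_i B + B^2\bigr)^{d_i}\,(y_t - \mu) = \theta(B)\,\varepsilon_t ,
\qquad |\nu_i| \le 1, \quad \varepsilon_t \ \text{white noise},
```

where φ(B) and θ(B) are the usual short-memory autoregressive and moving-average polynomials.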

Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. Heterophonic syntax has its own history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all within a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple, of course, if we consider the causal relationship between the elements of the sound vocabulary, in this case modalism, and the typologies of vertical organization appropriate to it. Therefore, in addition to the 'classic' pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of building a new musical ontology, the composer Ştefan Niculescu experimented, along with the mathematical organization of heterophony according to his own original methods, with the possibility of extrapolating this phenomenon to the macrostructural plane, thus arriving at the unique form of 'synchrony'. Founded on the principle of coincidentia oppositorum (involving the 'one-multiple' pair), the sound architecture imagined by Ştefan Niculescu consists of a (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer developed practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Electromagnetic Tuned Mass Damper Approach for Regenerative Suspension

This study is aimed at exploring the possibility of energy recovery through the suppression of vibrations. The article describes the design of an electromagnetic dynamic damper. The magnetic part of the device performs the function of a tuned mass damper, thereby providing both energy regeneration and damping to the protected mass. Based on tuned mass damper theory, the equations of the mathematical model were obtained. Then, for the given properties of the current system, the amplitude-frequency response was investigated. From these results, the main ideas and methods for further research were defined.
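
As a reference for the tuned-mass-damper theory invoked above, the classical two-degree-of-freedom equations and Den Hartog's tuning rules are reproduced below; the notation (primary mass m1, absorber mass m2, mass ratio μ = m2/m1) is assumed here, since the abstract does not state the paper's own symbols.

```latex
m_1\ddot{x}_1 + c_1\dot{x}_1 + k_1 x_1 + c_2(\dot{x}_1-\dot{x}_2) + k_2(x_1-x_2) = F(t), \\
m_2\ddot{x}_2 + c_2(\dot{x}_2-\dot{x}_1) + k_2(x_2-x_1) = 0, \\[4pt]
f_{\mathrm{opt}} = \frac{1}{1+\mu}, \qquad
\zeta_{\mathrm{opt}} = \sqrt{\frac{3\mu}{8(1+\mu)^3}}, \qquad \mu = \frac{m_2}{m_1}.
```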

Definition and Core Components of the Role-Partner Allocation Problem in Collaborative Networks

In the current, constantly changing economic context, collaborative networks allow partners to undertake projects that would not be possible if attempted by them individually. These projects usually involve the performance of a group of tasks (named roles) that have to be distributed among the partners. Thus, an allocation/matching problem arises that will be referred to as the Role-Partner Allocation problem. In real life, this situation is addressed by negotiation between partners in order to reach ad hoc agreements. Besides taking a long time and requiring hard work, such an approach is not recommended, as both historical evidence and economic analysis show. Instead, the allocation process should be automated by means of a centralized matching scheme. However, as a preliminary step in the search for such a matching mechanism (or even the development of a new one), the problem and its core components must be specified. To this end, this paper establishes (i) the definition of the problem and its constraints, (ii) the key features of the involved elements (i.e., roles and partners), and (iii) how to create preference lists for both roles and partners. Only in this way will it be possible to conduct subsequent methodological research on the solution method.
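
To make the notion of a centralized matching scheme driven by preference lists concrete, the sketch below runs a role-proposing deferred-acceptance (Gale-Shapley style) allocation on a toy instance. It only illustrates the class of mechanism the paper prepares the ground for; the roles, partners, and preferences are hypothetical, and this is not the matching mechanism the authors intend to select or develop.

```python
def allocate_roles(role_prefs, partner_prefs):
    """Role-proposing deferred acceptance: each role proposes down its
    preference list; each partner holds the best role proposed so far.
    Assumes one role per partner and complete preference lists."""
    rank = {p: {r: i for i, r in enumerate(prefs)} for p, prefs in partner_prefs.items()}
    next_choice = {r: 0 for r in role_prefs}     # next partner each role will propose to
    held = {}                                    # partner -> role currently held
    free_roles = list(role_prefs)
    while free_roles:
        role = free_roles.pop()
        partner = role_prefs[role][next_choice[role]]
        next_choice[role] += 1
        current = held.get(partner)
        if current is None:
            held[partner] = role
        elif rank[partner][role] < rank[partner][current]:
            held[partner] = role
            free_roles.append(current)           # displaced role becomes free again
        else:
            free_roles.append(role)              # rejected, will try its next partner
    return {r: p for p, r in held.items()}

# Hypothetical toy instance: three roles, three partners
role_prefs = {"design": ["A", "B", "C"], "logistics": ["B", "A", "C"], "testing": ["A", "C", "B"]}
partner_prefs = {"A": ["testing", "design", "logistics"],
                 "B": ["design", "logistics", "testing"],
                 "C": ["logistics", "testing", "design"]}
print(allocate_roles(role_prefs, partner_prefs))
```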

Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Kori Unit 1, the first commercial nuclear power reactor in South Korea, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is thus scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of processes: first, the plant inventory is investigated from various documents (i.e., equipment/component lists, construction records, and general arrangement drawings). Second, the radiological conditions of the systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the proper decontamination and dismantling (D&D) technologies are selected considering the various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results showed that the expected amounts of decommissioning waste were less than about 2% and 8% of the total waste generated (i.e., the sum of clean waste and radwaste) before and after waste processing, respectively, and it was found that the majority of the contaminated material was carbon or alloy steel and stainless steel. In addition, within the limits of the available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison showed that the radioactive waste amount from the Kori Unit 1 decommissioning was much less than that from the plants decommissioned in the U.S. and was comparable to that from the plants in Europe. This result stems from the difference in disposal cost and clearance criteria (i.e., free release level) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will provide useful information for establishing the decommissioning plan, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.

The Effect of Reducing Superimposed Dead Load on the Lateral Seismic Deformations of Structures

The vast majority of Middle East countries are prone to earthquakes. Despite that, and from a seismic hazard point of view, the high superimposed dead load of partitions and wearing materials on the reinforced concrete slabs constructed in these countries can increase the earthquake vulnerability of structures. The primary objective of this paper is to investigate the effect of reducing the superimposed dead load on the lateral seismic deformations of structures, the inter-story drifts, and the seismic pounding damage. The study utilizes a group of three reinforced concrete structures at three different site conditions. These structures are assumed to be constructed in the city of Nablus, Palestine, and to have superimposed dead load values of 1 kN/m², 3 kN/m², and 5 kN/m², respectively. The SAP2000 program, Version 18.1.1, is used to perform the response spectrum analysis and to obtain the potential lateral seismic deformations of the studied models. Surprisingly, the study shows that, at the same site, the superimposed dead load has only a minor effect on the lateral deflections of the models. This, however, supports the hypothesis that buildings fail during earthquakes mainly because they were not designed appropriately against gravity loads.

Structural Optimization Method for 3D Reinforced Concrete Building Structure with Shear Wall

In this paper, an optimization procedure is applied to a 3D reinforced concrete building structure with shear walls. In the optimization problem, the cross-sections of beams and columns and the shear wall dimensions are considered as design variables, and the optimal cross-sections are derived to minimize the total cost of the structure. For the final design application, the most suitable sections are selected to satisfy the ACI 318-14 code provisions based on static linear analysis. The validity of the method is examined through a numerical example of a 15-story 3D RC building with shear walls. This optimization method is expected to provide a useful reference in the early design stage and to be an effective and powerful tool for the structural design of RC shear wall structures.
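
The abstract does not spell out the cost function or constraints; a generic formulation of the kind of sizing problem described, written here only for illustration with assumed notation, would be:

```latex
\min_{\mathbf{x}\,\in\,\mathcal{X}} \; C(\mathbf{x}) \;=\; c_c \sum_{i} V_{c,i}(\mathbf{x}) \;+\; c_s \sum_{i} W_{s,i}(\mathbf{x})
\qquad \text{s.t.} \quad g_j(\mathbf{x}) \le 0,\; j = 1,\dots,m,
```

where x collects the beam/column cross-section dimensions and shear wall thicknesses, V_{c,i} and W_{s,i} are the concrete volume and steel weight of member i, c_c and c_s the corresponding unit costs, g_j the ACI 318-14 strength and serviceability checks evaluated from static linear analysis, and X the set of admissible sections.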

Characterization of Penicillin V Acid and Its Related Compounds by HPLC

Background: Penicillin V is a narrow-spectrum, bactericidal antibiotic of the beta-lactam family, belonging to the naturally occurring penicillin group. It is limited to infections caused by germs defined as sensitive. The objective of this work was to identify and characterize Penicillin V acid and its related compounds by high-performance liquid chromatography (HPLC). Methods: First, phenoxymethylpenicillin was identified by infrared absorption. The organoleptic characteristics, pH, and water content were also determined. The assay of the Penicillin V acid active substance and the determination of its related compounds were carried out on a Waters HPLC system equipped with a UV detector at 254 nm and a Discovery HS C18 column (250 mm × 4.6 mm, 5 µm) maintained at room temperature. The flow rate was about 1 ml per min. A mixture of water, acetonitrile and acetic acid (65:35:1) was used as the mobile phase for phenoxyacetic acid ('impurity B'), and a mixture of water, acetonitrile and acetic acid (650:150:5.75) for the assay and for 4-hydroxypenicillin V ('impurity D'). Results: The identification of the Penicillin V acid active substance and the evaluation of its chemical quality showed conformity with the USP 35th edition. The Penicillin V acid content in the raw material is equal to 1692.22 IU/mg. The percentage contents of phenoxyacetic acid and 4-hydroxypenicillin V were 0.035% and 0.323%, respectively. Conclusion: From these results, we can conclude that the Penicillin V acid active substance tested is of good physicochemical quality.

Comparison of the H-Index of Researchers in Google Scholar and Scopus

The H-index has been widely used as a performance indicator for researchers around the world, especially in Indonesia, where the government uses Scopus and Google Scholar as indexing references when granting recognition and appreciation. However, those two indexing services yield different H-index values. This paper therefore evaluates the difference between the H-indices provided by those services. Researchers indexed by Webometrics are used as the reference data in this paper; currently, Webometrics only uses the H-index from Google Scholar. This paper retrieves and compares the corresponding researchers' data from Scopus to obtain their H-index scores. Subsequently, some researchers with large differences in score are examined in more detail with respect to their papers' publishers. This paper shows that the H-index of researchers in Google Scholar is approximately 2.45 times their Scopus H-index. Most of the difference is due to the existence of uncertified publishers, which are counted by Google Scholar but not by Scopus.
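
For reference, the H-index itself is computed the same way by both services (the largest h such that the researcher has h papers with at least h citations each); the difference comes from which citations each service counts. The citation lists below are invented purely to show the computation and the kind of ratio the paper reports.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for the same researcher in each service
google_scholar = [45, 40, 33, 21, 18, 15, 12, 9, 7, 6, 4, 3, 2]
scopus = [30, 22, 12, 8, 5, 3, 2, 1]
h_gs, h_sc = h_index(google_scholar), h_index(scopus)
print(h_gs, h_sc, round(h_gs / h_sc, 2))   # the paper reports an average ratio of about 2.45
```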

Power and Wear Reduction Using Composite Links of Crank-Rocker Mechanism with Optimum Transmission Angle

Reducing energy consumption has become a major concern for all countries of the world in recent decades, and power saving is currently a principal goal of most industrial countries. It is well known that fossil fuels are the main pillar of the development of world countries. Unfortunately, the increased rate of fossil fuel consumption will lead to serious problems caused by the expected depletion of fuels; moreover, the emission of dangerous gases and vapors during fuel burning leads to severe environmental problems. Consequently, most engineering sectors, especially the mechanical ones, are looking to improve their machines while reducing energy consumption. The crank-rocker planar mechanism is the most widely applied mechanism in mechanical systems, and it is one of the most significant machine components for obtaining oscillatory motion. The transmission angle of this mechanism can be considered optimum when its extreme values deviate equally from 90°. The transmission angle also plays an important role in decreasing the required driving power and improving the dynamic properties of the mechanism. Hence, an appropriate selection of the mechanism's link lengths, which assures an optimum transmission angle, leads to a decrease in the driving power. Moreover, mechanism links manufactured from composite materials are lightweight, which decreases the required driving torque, and wear and corrosion problems can be mitigated by using composite links instead of metal ones. This paper deals with improving the performance of the crank-rocker mechanism using composite links, exploiting their flexural elastic modulus and stiffness in addition to the high damping of composite materials.
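
The optimum-transmission-angle criterion stated above (extreme values deviating equally from 90°) can be checked directly from the link lengths, since the extremes of the transmission angle of a four-bar crank-rocker occur when the crank is collinear with the fixed link. The sketch below uses assumed link lengths and generic symbols (crank a, coupler b, rocker c, fixed link d), not the dimensions studied in the paper.

```python
import math

def transmission_angle_extremes(a, b, c, d):
    """Extreme transmission angles (degrees) of a planar four-bar crank-rocker
    with crank a, coupler b, rocker c and fixed link d; the extremes occur when
    the crank lies along the fixed link (diagonal = d - a or d + a)."""
    g_min = math.degrees(math.acos((b**2 + c**2 - (d - a)**2) / (2 * b * c)))
    g_max = math.degrees(math.acos((b**2 + c**2 - (d + a)**2) / (2 * b * c)))
    return g_min, g_max

def deviations_from_90(a, b, c, d):
    """The linkage is near the optimum when both deviations are equal."""
    g_min, g_max = transmission_angle_extremes(a, b, c, d)
    return 90.0 - g_min, g_max - 90.0

# Hypothetical link lengths (consistent units); a Grashof crank-rocker needs a + d <= b + c
print(deviations_from_90(a=1.0, b=3.5, c=3.0, d=4.0))
```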