Aerodynamic Coefficients Prediction from Minimum Computation Combinations Using OpenVSP Software

OpenVSP is an aerodynamic solver developed by the National Aeronautics and Space Administration (NASA) that allows building a reliable model of an aircraft. This software performs an aerodynamic simulation according to the angle of attack the aircraft makes with the incoming airstream and to its speed. A reliable aerodynamic model of the Cessna Citation X was designed, but it required a great deal of computation time. As a consequence, a prediction method was established that predicts the lift and drag coefficients for all Mach numbers and all angles of attack, excluding stall conditions, from the computation of only three angles of attack at a single Mach number. The aerodynamic coefficients given by the prediction method for the Cessna Citation X model were finally compared with the aerodynamic coefficients obtained from a complete OpenVSP study.
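As an illustration of how such a prediction can work, the sketch below fits a linear lift curve and a parabolic drag polar to three hypothetical OpenVSP points at a single reference Mach number, then extends them to other Mach numbers with a Prandtl-Glauert correction. The fitting strategy, the correction, and every numerical value are assumptions for illustration; the paper's actual prediction method is not detailed in the abstract.

```python
import numpy as np

# Hypothetical OpenVSP results at the single reference Mach number:
# three angles of attack (deg) and the corresponding coefficients.
alpha_ref = np.array([0.0, 4.0, 8.0])        # computed angles of attack [deg]
cl_ref    = np.array([0.25, 0.58, 0.91])     # illustrative lift coefficients
cd_ref    = np.array([0.025, 0.032, 0.054])  # illustrative drag coefficients
mach_ref  = 0.3                              # the single computed Mach number

# Pre-stall models: linear lift curve CL = CL0 + CLa*alpha and a parabolic
# drag polar CD = CD0 + k*CL^2, each fitted to the three computed points.
cl_a, cl_0 = np.polyfit(alpha_ref, cl_ref, 1)
k_ind, cd_0 = np.polyfit(cl_ref**2, cd_ref, 1)

def predict(alpha_deg, mach):
    """Predicted CL, CD at any pre-stall angle of attack and Mach number."""
    # Prandtl-Glauert scaling relative to the reference Mach (subsonic only).
    scale = np.sqrt(1.0 - mach_ref**2) / np.sqrt(1.0 - mach**2)
    cl = (cl_0 + cl_a * alpha_deg) * scale
    return cl, cd_0 + k_ind * cl**2

print(predict(6.0, 0.6))  # CL, CD at 6 deg and Mach 0.6
```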

Towards a Deconstructive Text: Beyond Language and the Politics of Absences in Samuel Beckett’s Waiting for Godot

The writing of Samuel Beckett is associated with finding meaning in meaninglessness and with the production of what he calls a 'literature of the unword'. The casual escape from the world of words, in the form of silences and pauses in his play Waiting for Godot, prompts questions about their existence and ultimately leads to an investigation of the theory behind their use in the play. This paper proposes that these absences (silence and pause) in Beckett's play force us to think 'beyond' language. It asks how silence and pause in Beckett's text speak for the emergence of the poststructuralist text, and it aims to identify the significant features of the philosophy of deconstruction in Beckett's play in order to demystify the hostile complicity between literature and philosophy. Within the interpretive paradigm of poststructuralism, this research focuses on the text as its research data and attempts to delineate the relationship between poststructuralist theoretical concerns and Beckett's text. Keeping in view the theoretical concerns of the poststructuralist theorist Jacques Derrida, the discussion is directed towards the notion of moving 'beyond' language into absences that aim at silencing the existing discourse through the 'radical irony' of this anti-formal art, an art that contains its own denial and thus represents the idea of ceaseless questioning and radical contradiction in art and in any text. The article asks how Beckett's text vibrates with loud silence and disrupts language to demonstrate the emptiness of words, thereby exploring the limitless void of absences. Beckett's text resonates with a silence and pause that are neither negation nor affirmation but rather a poststructuralist suspension of reality, ever changing with the undecidability of all meanings. Within the theoretical notion of Derrida's différance, this study interprets silence and pause in Beckett's art: they behave like différance, questioning their own existence in the text so as to deconstruct any definiteness and finality of reality and to extend the undecidable threshold of poststructuralism that aims to evade the 'labyrinth of language'.

The Excess Loop Delay Calibration in Bandpass Continuous-Time Delta-Sigma Modulators Based on Q-Enhanced LC Filters

Q-enhanced LC filters are the most widely used architecture in bandpass (BP) continuous-time (CT) delta-sigma (ΣΔ) modulators, owing to their high-frequency operation, their higher linearity compared with active filters, and the high quality factor obtained by the Q-enhancement technique. This technique uses a negative resistance that compensates for the ohmic losses of the on-chip inductor. However, it also introduces a zero in the filter transfer function, which affects the modulator performance in terms of dynamic range (DR), stability, and in-band noise (signal-to-noise ratio, SNR). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are carried out for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. The simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared with the ideal LC-based ΣΔ modulator.
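To make the parasitic zero concrete, the sketch below builds the impedance of a lossy LC tank compensated by a parallel negative transconductance and extracts the zero, the enhanced Q, and the excess phase the zero contributes at the center frequency. The component values and the simple tank model are assumptions chosen to resonate near 300 MHz, not the circuit of the paper.

```python
import numpy as np
from scipy import signal

# Illustrative (assumed) tank values, chosen to resonate near 300 MHz.
L  = 10e-9    # on-chip inductance [H]
C  = 28e-12   # tank capacitance [F]
RL = 2.0      # inductor series loss [ohm]
Gm = 4.5e-3   # negative transconductance partially cancelling the loss [S]

# Tank impedance Z(s) = (sL + RL) / (s^2*L*C + s*(RL*C - Gm*L) + 1 - Gm*RL):
# the numerator root s = -RL/L is the parasitic zero discussed above.
zeros, poles, _ = signal.tf2zpk([L, RL], [L * C, RL * C - Gm * L, 1 - Gm * RL])

f0 = np.abs(poles[0]) / (2 * np.pi)                  # resonance frequency
Q  = np.abs(poles[0]) / (2 * np.abs(poles[0].real))  # enhanced quality factor
print(f"zero at {np.abs(zeros[0]) / (2 * np.pi) / 1e6:.1f} MHz")
print(f"resonance {f0 / 1e6:.1f} MHz, enhanced Q = {Q:.1f}")

# Excess phase contributed by the zero at the 300 MHz center frequency;
# this is the shift a retuned excess loop delay (ELD) must absorb.
w0 = 2 * np.pi * 300e6
print(f"phase lead of the zero at f0: {np.degrees(np.arctan(w0 * L / RL)):.1f} deg")
```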

Bluetooth Piconet System for Child Care Applications

This study concerns a safety device designed for child care. When children are out of sight or caregivers cannot pay constant attention to the situation, the functions of this device inform caregivers immediately, making sure that children do not get lost or hurt and thus ensuring their safety. Starting from this concept, a device was produced based on a relatively low-cost Bluetooth piconet system and a three-axis gyroscope sensor. The device transmits data to a mobile phone app through Bluetooth so that the user can monitor the situation at any time. By simply clipping the device in a pocket or on the waist and switching it on, it sends data to the phone to detect the child's falls and distance. Once the child exceeds the angle or distance set in the app, a warning is issued to inform the phone owner.
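A minimal sketch of the app-side logic is given below, assuming a gravity-vector tilt check as a stand-in for the device's gyroscope-based fall detection, and a log-distance path-loss model to turn Bluetooth RSSI into a distance estimate. All function names, thresholds, and calibration values are illustrative; the actual device protocol is not described in the abstract.

```python
import math

# Assumed app-side thresholds and calibration values (illustrative only).
MAX_TILT_DEG   = 60.0   # tilt angle treated as a fall
MAX_DISTANCE_M = 10.0   # allowed child-to-caregiver distance
TX_POWER_DBM   = -59    # measured RSSI at 1 m (typical BLE calibration value)
PATH_LOSS_N    = 2.0    # path-loss exponent, ~2 in free space

def tilt_from_gravity(ax, ay, az):
    """Tilt of the device (deg) from the gravity vector reported by the IMU."""
    return math.degrees(math.acos(az / math.sqrt(ax*ax + ay*ay + az*az)))

def distance_from_rssi(rssi_dbm):
    """Log-distance path-loss model: d = 10^((TxPower - RSSI) / (10 n))."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

def check_alarm(ax, ay, az, rssi_dbm):
    """Return the warnings the app would push to the caregiver's phone."""
    alerts = []
    if tilt_from_gravity(ax, ay, az) > MAX_TILT_DEG:
        alerts.append("possible fall")
    if distance_from_rssi(rssi_dbm) > MAX_DISTANCE_M:
        alerts.append("out of range")
    return alerts

print(check_alarm(0.1, 0.2, 0.97, rssi_dbm=-75))  # upright, ~6 m away -> []
```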

Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

In seismic data processing, the attenuation of random noise is a basic step in improving the quality of data for further use in exploration and development in the gas and oil industry. The signal-to-noise ratio also strongly determines the quality of seismic data and affects the reliability and accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To do so while preserving the important features of and information about the seismic signal, we introduce an anisotropic total fractional-order variation denoising algorithm. The anisotropic total fractional-order variation model, defined over the space of functions of fractional-order bounded variation, is proposed as a regularization for seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the model, and the corresponding denoising algorithm is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with the F-X deconvolution and non-local means denoising algorithms.
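A minimal numpy sketch of the split Bregman iteration for the anisotropic fractional-order TV model is shown below, assuming periodic boundaries so that the Grünwald-Letnikov-type fractional difference, with Fourier symbol (1 - e^{-iω})^α, can be applied in the frequency domain. The parameter values are illustrative, not the ones used in the paper.

```python
import numpy as np

def frac_tv_denoise(f, alpha=1.3, mu=20.0, lam=2.0, n_iter=50):
    """Anisotropic fractional-order TV denoising via split Bregman (sketch).

    Solves min_u |Dx^a u|_1 + |Dy^a u|_1 + (mu/2)||u - f||^2, with the
    fractional difference implemented as a periodic convolution whose
    Fourier symbol is (1 - e^{-iw})^a (Grunwald-Letnikov type)."""
    ny, nx = f.shape
    Kx = (1 - np.exp(-2j * np.pi * np.fft.fftfreq(nx)))[None, :] ** alpha
    Ky = (1 - np.exp(-2j * np.pi * np.fft.fftfreq(ny)))[:, None] ** alpha

    D = lambda u, K: np.real(np.fft.ifft2(K * np.fft.fft2(u)))            # D^a
    Dt = lambda u, K: np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(u)))  # adjoint
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    denom = mu + lam * (np.abs(Kx) ** 2 + np.abs(Ky) ** 2)
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_iter):
        # u-subproblem: (mu + lam*(Dx'Dx + Dy'Dy)) u = mu*f + lam*(Dx'(dx-bx)
        # + Dy'(dy-by)); solved exactly since all operators are diagonal in
        # Fourier space under periodic boundary conditions.
        rhs = mu * f + lam * (Dt(dx - bx, Kx) + Dt(dy - by, Ky))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # d-subproblems: soft shrinkage, then the Bregman variable updates.
        dx = shrink(D(u, Kx) + bx, 1.0 / lam)
        dy = shrink(D(u, Ky) + by, 1.0 / lam)
        bx = bx + D(u, Kx) - dx
        by = by + D(u, Ky) - dy
    return u

# Usage: denoised = frac_tv_denoise(noisy_section, alpha=1.3)
```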

The Influence of Fashion Bloggers on the Pre-Purchase Decision for Online Fashion Products among Generation Y Female Malaysian Consumers

This study explores how fashion consumers are influenced by fashion bloggers in their pre-purchase decisions for online fashion products in a non-Western context. Malaysians rank among the world's most avid online shoppers, with apparel the third most popular purchase category; however, extant research on fashion blogging focuses on the developed Western market context. Numerous international fashion retailers have entered the Malaysian market, from the luxury to the fast fashion segments, yet Malaysian fashion consumers must balance religious and social norms of modesty with their dress style and their adoption of fashion trends. Consumers increasingly mix and match Islamic and Western elements of dress to create new styles, enabling them to follow Western fashion trends whilst paying respect to social and religious norms. Social media have revolutionised the way consumers search for and find information about fashion products. For online fashion brands with no physical presence, social media provide a means of discovery for consumers. By allowing the creation and exchange of user-generated content (UGC) online, they provide a public forum that gives individual consumers their own voices, as well as access to product information that facilitates their purchase decisions. Social media empower consumers, and brands have important roles in facilitating conversations among consumers and between consumers and themselves, helping consumers connect with them and with one another. Fashion blogs have become an important source of fashion information. By sharing their personal style and inspiring their followers with what they wear on popular social media platforms such as Instagram, fashion bloggers have become fashion opinion leaders, and by creating UGC that spreads useful information to their followers, they influence the pre-purchase decision. Hence, successful Western fashion bloggers such as Chiara Ferragni may earn millions of US dollars every year, and some have created their own fashion ranges and beauty products, become judges in fashion reality shows, won awards, and collaborated with high street and luxury brands. As fashion blogging has become more established worldwide, increasing numbers of fashion bloggers have emerged from non-Western backgrounds to promote Islamic fashion styles, such as Hassanah El-Yacoubi and Dian Pelangi. This study adopts a qualitative approach, using a netnographic content analysis of consumer comments on two famous Malaysian fashion bloggers' Instagram accounts during January-March 2016 and qualitative interviews with 16 Malaysian Generation Y fashion consumers during September-October 2016. Netnography adapts ethnographic techniques to the study of online communities and computer-mediated communications. Template analysis of the data involved coding comments according to the theoretical framework developed from the literature review. Initial data analysis shows the strong influence of Malaysian fashion bloggers on their followers in terms of lifestyle and morals as well as fashion style. Followers were guided towards the mix and match trend of dress combining Western and Islamic elements, for example, by being shown how vivid colours or accessories could be worked into an outfit whilst still respecting social and religious norms. The blogger's Instagram account is a form of online community where followers can communicate and gain guidance and support from other followers, as well as from the blogger.

Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, adopting a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance and are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroskedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
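For the conditional-mean part, the sketch below simulates a 1-factor Gegenbauer (GARMA) process by spectral filtering, using the fact that each factor (1 - 2uB + B²)^(-d) has frequency response |2(cos λ - u)|^(-d). The frequency location and memory parameter are illustrative, and the spectral-filtering shortcut is a stand-in for a full GARMA estimation, not the paper's procedure.

```python
import numpy as np

def simulate_garma(n, f0=0.2, d=0.3, sigma=1.0, seed=0):
    """Simulate a 1-factor Gegenbauer (GARMA) process by spectral filtering.

    The k-factor GARMA filter prod_i (1 - 2 u_i B + B^2)^(-d_i) has gain
    |2(cos(lam) - u)|^(-d) for each factor; here k = 1 and u = cos(2*pi*f0)
    places the long-memory spectral peak at frequency f0."""
    rng = np.random.default_rng(seed)
    lam = 2 * np.pi * np.fft.rfftfreq(n)            # Fourier frequencies
    u = np.cos(2 * np.pi * f0)
    gain = np.abs(2 * (np.cos(lam) - u)) ** (-d)    # Gegenbauer filter gain
    gain[~np.isfinite(gain)] = 0.0                  # guard the singular bin
    eps = rng.normal(0.0, sigma, n)                 # Gaussian innovations
    return np.fft.irfft(gain * np.fft.rfft(eps), n)

series = simulate_garma(2048)   # a path with a spectral pole near f = 0.2
```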

Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model

The relations among tolerance classes, indispensable attributes, and knowledge dependency in the rough set model with a tolerance relation are explored. After giving definitions and concepts of knowledge dependency and of the knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether or not the decision attribute contains missing values, it is proved that complete knowledge dependency maintains reflexivity, transitivity, augmentation, decomposition, and the merge law. Knowledge dependency degrees (as opposed to complete knowledge dependency degrees) satisfy only some of these laws under transitivity, augmentation, and decomposition operations. An algorithm for attribute reduction in an incomplete decision table is designed, and its correctness is checked by an example.
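The sketch below illustrates the two central ingredients in code: the tolerance relation for objects with missing values (written '*') and the knowledge dependency degree as the fraction of objects whose tolerance class is decision-consistent. The toy decision table is hypothetical; in it, attribute 0 alone already yields the same dependency degree as the full attribute set, which is exactly the situation a reduction algorithm exploits.

```python
# Minimal sketch of the tolerance relation and knowledge dependency degree
# for an incomplete decision table; '*' marks a missing attribute value.
def tolerant(x, y, attrs):
    """Objects tolerate each other if every attribute agrees or is missing."""
    return all(x[a] == y[a] or x[a] == '*' or y[a] == '*' for a in attrs)

def tolerance_class(i, table, attrs):
    return {j for j in range(len(table)) if tolerant(table[i], table[j], attrs)}

def dependency_degree(table, cond_attrs, decisions):
    """gamma(C, D): fraction of objects whose tolerance class is consistent
    with a single decision value (i.e., lies in the positive region)."""
    pos = 0
    for i in range(len(table)):
        cls = tolerance_class(i, table, cond_attrs)
        if len({decisions[j] for j in cls}) == 1:
            pos += 1
    return pos / len(table)

# Illustrative incomplete decision table: 3 condition attributes, 1 decision.
table = [
    {0: 'high', 1: 'yes', 2: '*'},
    {0: 'high', 1: '*',   2: 'no'},
    {0: 'low',  1: 'no',  2: 'no'},
    {0: 'low',  1: 'yes', 2: 'yes'},
]
decisions = ['flu', 'flu', 'cold', 'cold']
print(dependency_degree(table, [0, 1, 2], decisions))  # full attribute set: 1.0
print(dependency_degree(table, [0], decisions))        # candidate reduct {0}: 1.0
```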

Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, and György Ligeti. The heterophonic syntax has a history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate in the (unintentional) differences or 'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations, and abbreviations, all within a flexible, non-periodic and immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple, of course, if we consider the causal relationship between the elements of the sound vocabulary, in this case modalism, and the typologies of vertical organization appropriate to it. Therefore, in addition to the 'classic' pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial, or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of building a new musical ontology, the composer Ştefan Niculescu experimented, along with the mathematical organization of heterophony according to his own original methods, with the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the principle of coincidentia oppositorum (involving the 'one-multiple' pair), the sound architecture imagined by Ştefan Niculescu consists of a (temporal) model or algorithm articulating two sound states: 1. the state of monovocality (principle of identity) and 2. the state of multivocality (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer cultivated practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), among others). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Electromagnetic Tuned Mass Damper Approach for Regenerative Suspension

This study explores the possibility of recovering energy through the suppression of vibrations. The article describes the design of an electromagnetic dynamic damper. The magnetic part of the device performs the function of a tuned mass damper, thereby providing both energy regeneration and damping to the protected mass. Equations of the mathematical models were obtained from the theory of the tuned mass damper. Then, for the given properties of the current system, the amplitude-frequency response was investigated. On this basis, the main ideas and methods for further research were defined.
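A minimal sketch of the underlying tuned-mass-damper mathematics is given below: the receptance of a two-mass system with the absorber tuned and damped per the classical Den Hartog rules. All parameter values are assumed for illustration and are not those of the electromagnetic damper in the study.

```python
import numpy as np

# Illustrative parameters: a 100 kg protected mass with a 5% absorber mass,
# tuned per the classical Den Hartog rules (values assumed, not the study's).
m1, k1 = 100.0, 3.95e5          # protected mass [kg] and stiffness [N/m]
mu = 0.05                       # mass ratio m2/m1
m2 = mu * m1
f_opt = 1.0 / (1.0 + mu)        # Den Hartog optimal frequency (tuning) ratio
k2 = m2 * (f_opt * np.sqrt(k1 / m1)) ** 2
zeta = np.sqrt(3 * mu / (8 * (1 + mu) ** 3))   # optimal absorber damping ratio
c2 = 2 * zeta * np.sqrt(k2 * m2)

w = np.linspace(1.0, 120.0, 2000)              # excitation frequency [rad/s]
# Receptance X1/F of the two-mass system, force applied to the main mass:
a = k1 + k2 - m1 * w**2 + 1j * c2 * w          # main-mass dynamic stiffness
b = k2 - m2 * w**2 + 1j * c2 * w               # absorber dynamic stiffness
coup = k2 + 1j * c2 * w                        # coupling term
H1 = b / (a * b - coup**2)

print(f"peak |X1/F| with TMD: {np.abs(H1).max():.2e} m/N")
print(f"static deflection   : {1 / k1:.2e} m/N")
```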

Definition and Core Components of the Role-Partner Allocation Problem in Collaborative Networks

In the current, constantly changing economic context, collaborative networks allow partners to undertake projects that would not be possible if attempted individually. These projects usually involve the performance of a group of tasks (named roles) that have to be distributed among the partners. Thus, an allocation/matching problem arises that will be referred to as the Role-Partner Allocation problem. In real life, this situation is addressed by negotiation between partners in order to reach ad hoc agreements. Besides taking a long time and requiring hard work, both historical evidence and economic analysis show that such an approach is not recommended. Instead, the allocation process should be automated by means of a centralized matching scheme. However, as a preliminary step in the search for such a matching mechanism (or even the development of a new one), the problem and its core components must be specified. To this end, this paper establishes (i) the definition of the problem and its constraints, (ii) the key features of the elements involved (i.e., roles and partners), and (iii) how to create preference lists for both roles and partners. Only in this way will it be possible to conduct subsequent methodological research on the solution method.
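The abstract deliberately stops at the problem definition, but to show the shape of the centralized matching scheme it points to, here is a deferred-acceptance (Gale-Shapley) sketch for a one-to-one version of the problem, with roles proposing to partners. The preference lists are hypothetical and assumed complete and of equal length; the paper itself does not commit to this mechanism.

```python
# Illustrative deferred-acceptance (Gale-Shapley) sketch for a one-to-one
# version of the Role-Partner Allocation problem.
def allocate(role_prefs, partner_prefs):
    """role_prefs / partner_prefs: dicts mapping each role (partner) to its
    complete preference list over partners (roles), best first."""
    rank = {p: {r: i for i, r in enumerate(prefs)}
            for p, prefs in partner_prefs.items()}
    match = {}                        # partner -> role currently held
    free = list(role_prefs)           # roles not yet assigned
    nxt = {r: 0 for r in role_prefs}  # next partner each role will approach
    while free:
        r = free.pop()
        p = role_prefs[r][nxt[r]]     # best partner not yet approached by r
        nxt[r] += 1
        if p not in match:
            match[p] = r
        elif rank[p][r] < rank[p][match[p]]:  # p prefers the new role
            free.append(match[p])
            match[p] = r
        else:
            free.append(r)                    # p rejects r
    return {role: p for p, role in match.items()}

role_prefs = {'design': ['A', 'B'], 'logistics': ['A', 'B']}
partner_prefs = {'A': ['logistics', 'design'], 'B': ['design', 'logistics']}
print(allocate(role_prefs, partner_prefs))  # {'design': 'B', 'logistics': 'A'}
```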

Synthesis of Mg/B Containing Compound in a Modified Microwave Oven

Magnesium-containing boron compounds with a hexagonal structure have drawn much attention due to their superconductive nature. The main goal of this work is a microwave oven, newly modified by our group, that can pass a gas through the oven chamber in order to obtain oxygen-free compounds such as c-BN. A magnesium-containing boride was synthesized by this modified-microwave method under a nitrogen atmosphere, using amorphous boron and a magnesium source in the appropriate molar ratio; the oxygen-free environment of the modified oven was intended to yield magnesium boride free of oxide contamination. Characterization was done by powder X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. A magnesium-containing boride, generally named magnesium boride, of amorphous character and free of oxygen was obtained via the designed microwave oven system.

Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first South Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: firstly, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, and general arrangement drawings). Secondly, the radiological conditions of the systems, structures, and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Thirdly, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourthly, the proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results show that the expected amounts of decommissioning wastes are less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and that the majority of the contaminated material is carbon or alloy steel and stainless steel. In addition, within the range of available information, the evaluation results were compared with data from various decommissioning experiences and with international and national decommissioning studies. The comparison shows that the radioactive waste amounts from the Kori Unit 1 decommissioning are much less than those from the plants decommissioned in the U.S. and are comparable to those from the plants in Europe. This result comes from the differences in disposal cost and in clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will serve as important input for decommissioning planning, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.

The Effect of Reducing Superimposed Dead Load on the Lateral Seismic Deformations of Structures

The vast majority of Middle East countries are prone to earthquakes. From a seismic hazard point of view, the high superimposed dead load of the partitions and wearing materials placed on reinforced concrete slabs in these countries can increase the earthquake vulnerability of structures. The primary objective of this paper is to investigate the effect of reducing the superimposed dead load on the lateral seismic deformations of structures, the inter-story drifts, and seismic pounding damage. The study utilizes a group of three reinforced concrete structures at three different site conditions. These structures are assumed to be constructed in the city of Nablus, Palestine, with superimposed dead load values of 1 kN/m², 3 kN/m², and 5 kN/m², respectively. The SAP2000 program, Version 18.1.1, is used to perform response spectrum analyses and obtain the potential lateral seismic deformations of the studied models. Surprisingly, the study finds that, at the same site, the superimposed dead load has only a minor effect on the lateral deflections of the models. This, however, supports the hypothesis that buildings fail during earthquakes mainly because they were not designed appropriately against gravity loads.
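To illustrate where the superimposed dead load enters such an analysis, the sketch below runs a textbook response-spectrum calculation for a three-story shear building: the SDL only changes the floor masses, from which the modal properties, spectral displacements, and SRSS inter-story drifts follow. All numbers (plan area, story stiffnesses, flat design spectrum) are assumptions, not the studied models.

```python
import numpy as np
from scipy.linalg import eigh

# Assumed floor data; the superimposed dead load (SDL) enters the seismic
# analysis only through the floor masses.
area, sdl, slab_dead = 300.0, 3.0, 7.0            # m^2, kN/m^2, kN/m^2
m = (slab_dead + sdl) * area / 9.81 * np.ones(3)  # floor masses [t]
k = 2.0e5 * np.ones(3)                            # story stiffnesses [kN/m]

# Shear-building stiffness and mass matrices (3 stories).
K = np.diag(k + np.append(k[1:], 0.0)) - np.diag(k[1:], 1) - np.diag(k[1:], -1)
M = np.diag(m)

lam, phi = eigh(K, M)              # generalized eigenproblem K x = lam M x
w = np.sqrt(lam)                   # modal circular frequencies [rad/s]

Sa = lambda w_i: 2.5 * 0.3 * 9.81  # flat design spectrum [m/s^2], assumed
drift_sq = np.zeros(3)
for i in range(3):
    gamma = (phi[:, i] * m).sum() / (phi[:, i] ** 2 * m).sum()  # participation
    u = gamma * Sa(w[i]) / w[i] ** 2 * phi[:, i]  # modal floor displacements
    drift_sq += np.diff(np.append(0.0, u)) ** 2   # accumulate for SRSS
print("SRSS inter-story drifts [m]:", np.sqrt(drift_sq))
```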

Relative Navigation with Laser-Based Intermittent Measurement for Formation Flying Satellites

This study presents a precise relative navigation method for satellites flying in formation using laser-based intermittent measurement data. The measurement data for the relative navigation between two satellites consist of the relative distance measured by a laser instrument and the relative attitude angles measured by attitude determination. The relative navigation solutions are estimated by both the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). The solutions estimated by the EKF may become inaccurate or even diverge as the measurement outage time gets longer, because the EKF relies on a linearization approach. This study shows, however, that the UKF with appropriate scaling parameters provides stable and accurate relative navigation solutions despite long measurement outage times and large initial errors, as compared with the relative navigation solutions of the EKF. Various navigation results are analyzed by adjusting the scaling parameters of the UKF.
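The scaling parameters being adjusted are those of the scaled unscented transform; a minimal sketch of the sigma-point generation is given below (van der Merwe parameterization), with a toy two-state relative position/velocity example. The state dimensions and covariances are illustrative, not those of the actual formation-flying filter.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented transform sigma points and weights (van der Merwe).

    alpha controls the spread of the points around the mean, beta encodes
    prior knowledge of the distribution (beta = 2 is optimal for Gaussians),
    and kappa is a secondary scaling term; these are the tuning knobs that
    keep the UKF stable over long measurement outages."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)       # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])      # the 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha**2 + beta
    return pts, wm, wc

# Example: relative position/velocity state with 10 m / 0.1 m/s uncertainty.
x0 = np.zeros(2)
P0 = np.diag([10.0**2, 0.1**2])
pts, wm, wc = sigma_points(x0, P0, alpha=0.5)
print(pts)
```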

Characterization of Penicillin V Acid and Its Related Compounds by HPLC

Background: Penicillin V is a narrow-spectrum, bactericidal antibiotic of the beta-lactam family, belonging to the naturally occurring penicillin group. Its use is limited to infections caused by germs defined as sensitive. The objective of this work was to identify and characterize penicillin V acid and its related compounds by high-performance liquid chromatography (HPLC). Methods: Firstly, phenoxymethylpenicillin was identified by infrared absorption. The organoleptic characteristics, pH, and water content were also studied. The assay of the penicillin V acid active substance and the determination of its related compounds were carried out on a Waters HPLC system equipped with a UV detector at 254 nm and a Discovery HS C18 column (250 mm × 4.6 mm, 5 µm) maintained at room temperature. The flow rate was about 1 mL per min. A mixture of water, acetonitrile, and acetic acid (65:35:1) was used as the mobile phase for phenoxyacetic acid ('impurity B'), and a mixture of water, acetonitrile, and acetic acid (650:150:5.75) for the assay and for 4-hydroxypenicillin V ('impurity D'). Results: The identification of the penicillin V acid active substance and the evaluation of its chemical quality showed conformity with the USP 35th edition. The penicillin V acid content in the raw material is equal to 1692.22 IU/mg. The percentage contents of phenoxyacetic acid and 4-hydroxypenicillin V were 0.035% and 0.323%, respectively. Conclusion: From these results, we can conclude that the penicillin V acid active substance tested is of good physicochemical quality.

Comparison of the H-Index of Researchers in Google Scholar and Scopus

The h-index has been widely used as a performance indicator for researchers around the world, especially in Indonesia, where the government uses Scopus and Google Scholar as indexing references in granting recognition and appreciation. However, these two indexing services yield different h-index values. For that purpose, this paper evaluates the difference between the h-indices from the two services. Researchers indexed by Webometrics are used as the reference data; currently, Webometrics only uses the h-index from Google Scholar. This paper observed the corresponding researchers' data in Scopus to obtain their h-index scores and compared the two, and some researchers with large differences in score were examined in more detail with respect to their papers' publishers. The paper shows that the h-index of researchers in Google Scholar is approximately 2.45 times their Scopus h-index. Most of the difference is due to uncertified publishers, which are counted by Google Scholar but not by Scopus.
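For reference, the comparison boils down to computing the same h-index on two different citation lists; a sketch with hypothetical citation counts is given below.

```python
# Sketch of the h-index computation used to compare the two services:
# h is the largest value such that the researcher has h papers with
# at least h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# Hypothetical citation counts for one researcher in the two services;
# Google Scholar also counts citations from uncertified publishers.
scholar = [48, 30, 22, 15, 12, 9, 8, 8, 5, 3, 2, 1]
scopus  = [20, 12, 9, 6, 4, 3, 1, 0]
print(h_index(scholar), h_index(scopus))   # 8 vs 4 -> a ratio of 2.0
```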

Power and Wear Reduction Using Composite Links of Crank-Rocker Mechanism with Optimum Transmission Angle

Reducing energy consumption has become a major concern for all countries of the world during recent decades, and power saving is currently a nominal goal of most industrial countries. It is well known that fossil fuels are the main pillar of the development of the world's countries; unfortunately, the increased rate of fossil fuel consumption will lead to serious problems caused by the expected depletion of fuels, and the emission of dangerous gases and vapors during fuel burning causes severe environmental problems. Consequently, most engineering sectors, especially the mechanical sectors, are looking to improve machines while reducing their energy consumption. The crank-rocker planar mechanism is among the most widely applied in mechanical systems and is one of the most significant machine components for obtaining oscillatory motion. The transmission angle of this mechanism can be considered optimal when its extreme values deviate equally from 90°. The transmission angle plays an important role in decreasing the required driving power and improving the dynamic properties of the mechanism; hence, an appropriate selection of the link lengths, one which ensures an optimal transmission angle, decreases the driving power. Moreover, mechanism links manufactured from composite materials provide lightweight links, which decreases the required driving torque, and wear and corrosion problems can be treated by using composite links instead of metal ones. This paper deals with improving the performance of the crank-rocker mechanism using composite links, exploiting their flexural elastic modulus and stiffness in addition to the high damping of composite materials.
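The transmission-angle criterion can be checked directly from the link lengths, as in the sketch below: the angle between coupler and rocker follows from the cosine rule applied to the linkage diagonal, and its extremes at crank angles of 0° and 180° are compared with 90°. The link lengths are assumed values satisfying Grashof's crank-rocker condition, not an optimized design.

```python
import numpy as np

# Transmission angle of a crank-rocker four-bar over a full crank revolution.
# Link lengths (assumed, satisfying Grashof's crank-rocker condition):
a, b, c, d = 0.3, 1.0, 0.9, 1.2   # crank, coupler, rocker, ground [m]

theta = np.radians(np.arange(0, 361))
f2 = a**2 + d**2 - 2 * a * d * np.cos(theta)            # diagonal squared
mu = np.degrees(np.arccos((b**2 + c**2 - f2) / (2 * b * c)))

# The extremes occur at theta = 0 and 180 deg; an optimum linkage makes them
# deviate equally from 90 deg (here: check how balanced this design is).
print(f"mu_min = {mu.min():.1f} deg, mu_max = {mu.max():.1f} deg")
print(f"deviation from 90 deg: {90 - mu.min():.1f} vs {mu.max() - 90:.1f}")
```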

Formulation of Mortars with Marine Sediments

The transition to a more sustainable economy requires a reduction in the consumption of raw materials for equivalent production. The recovery of byproducts, and especially of dredged sediment, as a mineral addition in cement matrices represents an alternative that reduces raw material consumption and the construction sector's carbon footprint. However, the efficient use of sediment requires adequate and optimal treatment. Several processing techniques have so far been applied in order to improve certain physicochemical properties; heat treatment by calcination has been effective in removing the organic fraction and activating the pozzolanic properties. In this article, the effect of an optimized heat treatment of marine sediments on the physico-mechanical and environmental properties of mortars is shown. One finding is that the optimal substitution of a portion of the cement by sediments treated by calcination at 750 °C helps to maintain or improve the mechanical properties of the cement matrix in comparison with a standard reference mortar. The use of calcined sediment enhances mortar behavior in terms of mechanical strength and durability. From an environmental and life cycle point of view, the mortars formulated with treated sediments are considered inert with respect to the inert waste storage facility reference (ISDI-France).

Shear Modulus Degradation of a Liquefiable Sand Deposit by Shaking Table Tests

The strength and deformability characteristics of a liquefiable sand deposit, including the development of earthquake-induced shear stress and shear strain as well as soil softening via the progressive degradation of the shear modulus, were studied through shaking table experiments. To do so, a model of a liquefiable sand deposit was constructed and densely instrumented, with accelerations, pressures, and displacements continuously monitored at different locations. Furthermore, the effects of confinement on the strength and deformation characteristics of the deposit under an external surcharge were examined by placing a heavy concrete slab (i.e., a model of an actual rigid structural pavement) on the ground surface. The results indicate that, as the number of seismic loading cycles increases, the sand deposit softens progressively as large shear strains take place in the different sand elements. The liquefaction state is reached through the combined effects of the progressive degradation of the initial shear modulus, associated with the continuous decrease in the mean principal stress, and the buildup of excess pore pressure in the sand deposit. Finally, the confinement provided by the concrete slab placed on the surface of the deposit resulted in a favorable increase in the initial shear modulus, an increase in the mean principal stress, and a decrease in the softening rate (i.e., the rate of decrease of the shear modulus) of the sand, thus delaying the onset of liquefaction: liquefaction took place only after the sand deposit with the concrete slab had experienced a higher number of seismic loading cycles than an ordinary deposit with no slab.
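The shear stress and strain histories in this kind of test are conventionally recovered from the instrument array with a one-dimensional shear-beam idealization (after Zeghal and Elgamal); a sketch is given below, in which the sensor depths and soil density are placeholders. The secant modulus per loading cycle then tracks the progressive degradation reported above.

```python
import numpy as np

def stress_strain(acc, disp, z, rho=1900.0):
    """Shear stress/strain histories at midpoints between sensor depths.

    acc, disp: (n_sensors, n_samples) arrays, shallowest sensor first;
    z: sensor depths [m]; rho: soil density [kg/m^3] (placeholder value).
    1-D shear beam: tau(z) is the integral of rho * acc from the surface."""
    tau, gamma = [], []
    for i in range(len(z) - 1):
        zm = 0.5 * (z[i] + z[i + 1])               # evaluation depth
        t = rho * z[0] * acc[0]                    # column above first sensor
        for j in range(i):                         # trapezoidal integration
            t = t + rho * (z[j + 1] - z[j]) * 0.5 * (acc[j] + acc[j + 1])
        t = t + rho * (zm - z[i]) * acc[i]
        tau.append(t)
        # strain from the displacement difference between adjacent sensors
        gamma.append((disp[i + 1] - disp[i]) / (z[i + 1] - z[i]))
    return np.array(tau), np.array(gamma)

def secant_modulus(tau_cyc, gamma_cyc):
    """Secant shear modulus of one loading cycle; its decay over successive
    cycles is the progressive degradation discussed in the abstract."""
    return (tau_cyc.max() - tau_cyc.min()) / (gamma_cyc.max() - gamma_cyc.min())
```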