Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing

Cardiopulmonary signal monitoring without contact electrodes or any type of in-body sensor has several applications, such as sleep monitoring and the continuous monitoring of vital signs in bedridden patients. This system also has applications in the vehicular environment, to monitor the driver and avoid a possible accident in the case of cardiac failure. Thus, the bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the changing distance between the radar antennas and the person's chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee the patient's immobilization; hence, their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, along with its simulation in MATLAB. The algorithm used for breathing rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to demonstrate that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
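
The DC offset estimation step can be illustrated with a short numerical sketch (not the authors' MATLAB implementation). It fits a circle to the I/Q samples collected while the subject moves, since the motion sweeps a larger arc of the constellation and makes the fit well conditioned, removes the estimated offsets, and extracts the breathing rate from the phase by arctangent demodulation and an FFT peak. The sampling rate, carrier frequency, signal model, and parameter values are assumptions for illustration only.

```python
import numpy as np

fs = 100.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)     # 60 s acquisition

# Synthetic quadrature signals: breathing at 0.25 Hz (15 breaths/min) plus a slow
# body motion and unknown DC offsets, following the usual CW Doppler signal model.
wavelength = 3e8 / 5.8e9                              # 5.8 GHz carrier (assumed)
displacement = (4e-3 * np.sin(2 * np.pi * 0.25 * t)
                + 10e-3 * np.sin(2 * np.pi * 0.05 * t))   # chest wall + body motion (m)
phase = 4 * np.pi * displacement / wavelength
dc_i, dc_q = 0.8, -0.6                                # unknown DC offsets
i_sig = dc_i + np.cos(phase) + 0.01 * np.random.randn(t.size)
q_sig = dc_q + np.sin(phase) + 0.01 * np.random.randn(t.size)

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit of the I/Q constellation; returns its centre."""
    A = np.column_stack([x, y, np.ones_like(x)])
    sol = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return sol[0] / 2, sol[1] / 2

cx, cy = fit_circle(i_sig, q_sig)                     # estimated DC offsets
phase_hat = np.unwrap(np.arctan2(q_sig - cy, i_sig - cx))

# Breathing rate from the dominant spectral peak of the demodulated phase
spectrum = np.abs(np.fft.rfft(phase_hat - phase_hat.mean()))
freqs = np.fft.rfftfreq(phase_hat.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)                  # plausible breathing band
breath_hz = freqs[band][np.argmax(spectrum[band])]
print(f"offsets ({cx:.2f}, {cy:.2f}); breathing rate {breath_hz * 60:.1f} breaths/min")
```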

Influence of Infrared Radiation on the Growth Rate of Microalgae Chlorella sorokiniana

Nowadays, the progressive depletion of primary natural resources and the ongoing rise in energy demand have resulted in the development of new-generation technological processes focused on step-wise production and residue utilization. Thus, a microalgae-based third-generation bioeconomy is considered one of the most promising approaches, allowing the production of value-added products and the sophisticated utilization of residual biomass. In comparison to conventional biomass, microalgae can be cultivated in a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. However, one of the most challenging tasks is to cope with seasonal variations and to achieve optimal growing conditions in indoor closed systems that can cover further demand for the material and energetic utilization of microalgae. For instance, outdoor cultivation in St. Petersburg (Russia) is only suitable within a rather narrow time frame (from mid-May to mid-September); at earlier and later periods, sunlight and heat are insufficient for the growth of microalgae. On the other hand, without additional physical effects, the biomass increases three to five-fold per week in summer, depending on the solar radiation and the ambient temperature. In order to increase biomass production, scientists from all over the world have proposed various technical solutions for cultivators and have been studying the influence of various physical factors affecting biomass growth, namely magnetic fields, radiation, electric fields, etc. In this paper, the influence of infrared (IR) radiation and fluorescent light on the growth rate of the microalgae Chlorella sorokiniana has been studied. The cultivation of Chlorella sorokiniana was carried out in 500 ml cylindrical glass vessels, which were constantly aerated. To accelerate the cultivation process, the mixture was stirred for 15 minutes at 500 rpm after every 120 minutes of rest. At the same time, the metabolic nutrient requirements were met by adding micro- and macro-nutrients to the microalgae growing medium. Lighting was provided by fluorescent lamps with an intensity of 2500 ± 300 lx. The influence of IR was determined using IR lamps (220 V, 250 W) operated to achieve an intensity of 13,600 ± 500 lx. The obtained results show that, under fluorescent lamps combined with active aeration and intermittent mixing, the biomass increased three-fold by the 2nd day and eight-fold by the 7th day. The growth rate of the microalgae under IR radiation was lower, reaching 22.6·10⁶ cells·mL⁻¹. However, the application of IR lamps for biomass growth allows maintaining the optimal temperature of the microalgae suspension at approximately 25-28 °C, which may be especially beneficial during the cold season in extreme climate zones.
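
For readers who want to compare such growth figures quantitatively, the reported fold-increases can be converted into a specific growth rate and doubling time. The short sketch below is only an illustrative calculation based on the stated fold-changes; it is not part of the authors' experimental procedure.

```python
import math

def specific_growth_rate(fold_increase: float, days: float) -> float:
    """Specific growth rate mu = ln(N_t / N_0) / t, in 1/day."""
    return math.log(fold_increase) / days

for label, fold, days in [("fluorescent, day 2", 3.0, 2.0),
                          ("fluorescent, day 7", 8.0, 7.0)]:
    mu = specific_growth_rate(fold, days)
    doubling = math.log(2) / mu
    print(f"{label}: mu = {mu:.2f} 1/day, doubling time = {doubling:.2f} days")
```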

Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Over the last decade, due to climate change and a strategy of natural resource preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), a microalgae bioeconomy can supply the food, feed, pharmaceutical, and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated in a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents a state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalgae Chlorella sorokiniana were cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5000 L) and flat-panel reactors (8 x 180 L). In order to achieve the optimum growth conditions along with a suitable cellular composition for the further extraction of value-added components, process parameters such as light intensity, temperature, and pH are continuously monitored. On the other hand, the metabolic nutrient requirements were met by adding micro- and macro-nutrients to the medium to ensure autotrophic growth conditions for the microalgae. The cultivation was followed by downstream processing and the extraction of lipids, proteins, and saccharides. Lipid extraction was conducted in a repeated-batch, semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol were used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation, between 8.1% and 13.9%. The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with the protein content in the range of 39.8-47.1%. The characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at wavelengths of 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues are used as a substrate for anaerobic digestion at laboratory scale. The methane concentration, measured on a daily basis, showed some variation between samples after the extraction steps but remained in the range of 48-55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be reused within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
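
As a brief illustration of two of the calculations behind the figures above, the sketch below converts a Total Kjeldahl Nitrogen measurement to crude protein using the conventional nitrogen-to-protein factor of 6.25 and expresses an extracted lipid mass as a percentage of dry biomass. The conversion factor and all input values are assumptions chosen for illustration; the paper does not state which factor or masses were used.

```python
def protein_from_tkn(tkn_percent_dry_wt: float, factor: float = 6.25) -> float:
    """Crude protein (% of dry weight) from TKN using a nitrogen-to-protein factor."""
    return tkn_percent_dry_wt * factor

def lipid_content(lipid_mass_g: float, dry_biomass_g: float) -> float:
    """Total lipid content as a percentage of dry biomass."""
    return 100.0 * lipid_mass_g / dry_biomass_g

# Hypothetical inputs, chosen only to land inside the ranges reported above.
print(f"Protein: {protein_from_tkn(7.0):.1f} % of dry weight")       # 7.0 % N -> 43.8 %
print(f"Lipids:  {lipid_content(0.139, 1.0):.1f} % of dry biomass")  # 13.9 %
```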

Use and Relationship of Shell Nouns as Cohesive Devices in the Quality of Second Language Writing

The current study is a comparative analysis of the use of shell nouns as a cohesive device (CD) in an English as a Second Language (ESL) setting, conducted in order to identify their use and their relationship to the quality of second language (L2) writing. As these nouns have been established to anticipate meaning within, across, or outside the text, their use has fascinated writing researchers. The corpus of the study included published articles from reputable journals and graduate students' papers, and was used to analyze the frequency of shell nouns using nouns that are "highly prevalent" in the academic community, to identify the different lexicogrammatical patterns in which these nouns occur, and to determine the functions connected with these patterns. The results of the study imply that published authors used more shell nouns in their papers than graduate students did. However, the functions of the different lexicogrammatical patterns for the frequently occurring shell nouns are somewhat similar. These results could help students enhance the cohesion of their texts and make them easier to comprehend.
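
A minimal sketch of the kind of frequency count described above is given below. The list of shell nouns and the sample text are hypothetical stand-ins, not the study's actual noun list or corpora; the count is normalized per 10,000 words so that corpora of different sizes (published articles vs. student papers) can be compared.

```python
import re
from collections import Counter

# Hypothetical subset of "highly prevalent" shell nouns; the study's own list differs.
SHELL_NOUNS = {"fact", "idea", "problem", "result", "possibility", "reason"}

def shell_noun_frequency(text: str):
    """Count shell-noun tokens and normalize per 10,000 running words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tok for tok in tokens if tok in SHELL_NOUNS)
    per_10k = 10000 * sum(counts.values()) / max(len(tokens), 1)
    return counts, per_10k

sample = ("The fact that the result was unexpected raises the possibility "
          "that the problem lies elsewhere.")
counts, rate = shell_noun_frequency(sample)
print(counts, f"{rate:.1f} per 10,000 words")
```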

3D Numerical Investigation of Asphalt Pavements Behaviour Using Infinite Elements

This article presents the main results of a three-dimensional (3-D) numerical investigation of the behaviour of asphalt pavement structures using a coupled Finite Element-Mapped Infinite Element (FE-MIE) model. The validation and numerical performance of this model are assessed by comparing critical pavement responses with Burmister's solution and with FEM simulation results for multi-layered elastic structures. The coupled model is then used to perform 3-D simulations of a typical asphalt pavement structure in order to investigate the impact of two tire configurations (the conventional dual assembly and the new-generation wide-base tire) on critical pavement response parameters. The numerical results obtained show the effectiveness and accuracy of the coupled FE-MIE model. In addition, the simulation results indicate that, compared with the conventional dual tire assembly, the single wide-base tire causes slightly greater fatigue cracking and subgrade rutting potentials, but it can nevertheless be utilised in view of its potential to provide numerous mechanical, economic, and environmental benefits.
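
A flavour of the kind of closed-form benchmark used in such validations can be given with the one-layer limiting case: for a uniform circular load on a homogeneous elastic half-space, the surface deflection at the centre of the loaded area is w0 = 2 q a (1 − ν²)/E. The snippet below evaluates this classical expression for illustrative tire-load values; it is not Burmister's multi-layer solution, and the input numbers are assumptions, not results from the paper.

```python
def halfspace_center_deflection(q_kpa: float, a_m: float, e_mpa: float, nu: float) -> float:
    """Surface deflection (mm) at the centre of a uniform circular load
    on a homogeneous elastic half-space: w0 = 2 q a (1 - nu^2) / E."""
    q = q_kpa * 1e3        # contact pressure, Pa
    e = e_mpa * 1e6        # elastic modulus, Pa
    w = 2.0 * q * a_m * (1.0 - nu**2) / e
    return w * 1e3         # mm

# Hypothetical values: 700 kPa contact pressure, 0.15 m contact radius,
# 50 MPa equivalent modulus, Poisson's ratio 0.35.
print(f"w0 = {halfspace_center_deflection(700, 0.15, 50, 0.35):.2f} mm")
```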

Reading and Teaching Poetry as Communicative Discourse: A Pragma-Linguistic Approach

Language is communication on several discourse levels. The target of teaching a language and the literature of a foreign language is to communicate a message. Reading, appreciating, analysing, and interpreting poetry as a sophisticated rhetorical expression of human thoughts, emotions, and philosophical messages is more feasible through the use of linguistic pragmatic tools from a communicative discourse perspective. The poet's intention, speech act, illocutionary act, and perlocutionary goal can be better understood when theories of communicative situational context and linguistic discourse structure are employed. The use of linguistic theories in the teaching of poetry is, therefore, intrinsic to students' comprehension, interpretation, and appreciation of poetry of the different ages. The purpose of this study is to show how both teachers and students can apply these linguistic theories and tools to dramatic poetic texts for an engaging, enlightening, and effective interpretation and appreciation of the language. Theories drawn from pragmatics, discourse analysis, embedded discourse levels, communicative situational context, and other linguistic approaches were applied to selected poetry texts from different centuries. Further, a simple statistical count shows that, in different anthologies, the number of poems with dialogic dramatic discourse embedding two or three levels of discourse outweighs the number of descriptive poems with one level of discourse, between the poet and the reader. Poetry is thus discourse on one, two, or three levels. It is, therefore, recommended that teachers and students in the area of ESL/EFL use these linguistic theories for a better understanding of poetry as communicative discourse. The practice of applying these linguistic theories in classrooms and in research will allow them to perceive the language and its linguistic, social, and cultural aspects. Texts will become live illocutionary acts with a perlocutionary goal rather than mere literary texts in anthologies.

Assessing the Antimicrobial Activity of Chitosan Nanoparticles by Fluorescence-Labeling

Chitosan is a natural polysaccharide prepared by the N-deacetylation of chitin. In this study, the physicochemical and antibacterial properties of chitosan nanoparticles produced by ultrasound irradiation were evaluated. The physicochemical properties of the nanoparticles were determined by dynamic light scattering and zeta potential analysis. The chitosan nanoparticles inhibited the growth of E. coli. The minimum inhibitory concentration (MIC) values were lower than 0.5 mg/mL, and the minimum bactericidal concentration (MBC) values were similar to or higher than the MIC values. Confocal laser scanning microscopy (CLSM) was used to observe the interaction of E. coli suspensions with FITC-labeled chitosan polymers and nanoparticles.

A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

In the design cycle, a main design task is to determine the external shape of the product. The external shape of a product is one of the key factors that can affect customers' preferences and, with them, the motivation to buy the product, especially in the case of a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences therefore needs to be studied to enhance the customer's purchase desire and action. In this research, a design-for-customer-preferences model is developed for investigating the relationships between the external shape of a product and customer preferences. In the first stage, the names of the geometric features are collected and evaluated from the data of specified internet web pages using the developed text miner. A geometric feature is identified as a key feature if its number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are then gathered by the text miner from the internet data on the web pages. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups. Several design suggestion cases can then be proposed, for example, a large model, a mid-size model, and a mini model for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each key geometric feature of the design suggestion cases. The design suggestion case with the top-ranking customer preference index can be selected as the final design of the product. In this paper, an example product, a notebook computer, is illustrated. It shows that the external shape of a product can be used to drive customer preferences. The presented design-for-customer-preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
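
The clustering stage described above can be sketched with scikit-learn's KMeans applied to a small set of hypothetical geometric feature values (lengths, widths, thicknesses); the feature names, data, and number of clusters are illustrative assumptions, not the paper's mined data or its customer preference index.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical key geometric features mined from product pages:
# [length_mm, width_mm, thickness_mm, screen_in]
shapes = np.array([
    [146, 71, 7.4, 5.8], [158, 76, 8.1, 6.5], [131, 64, 6.9, 4.7],
    [160, 78, 8.3, 6.7], [148, 70, 7.6, 5.9], [133, 66, 7.0, 4.9],
])

# Standardize the features so no single dimension dominates the distance metric
X = StandardScaler().fit_transform(shapes)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Each cluster corresponds to one design suggestion case (e.g. large / mid-size / mini)
for group in range(3):
    members = shapes[labels == group]
    print(f"group {group}: mean shape = {members.mean(axis=0).round(1)}")
```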

A 15 Minute-Based Approach for Berth Allocation and Quay Crane Assignment

In traditional integrated berth allocation and quay crane assignment models, the time dimension is usually discretized on an hourly basis. However, nowadays, transshipment has become the main business of many container terminals, especially in East and Southeast Asia (e.g., Hong Kong and Singapore). In these terminals, vessel arrivals are usually very frequent, with small handling volumes and very short stay times. Therefore, the traditional hourly-based modeling approach may cause significant berth and quay crane idling and consequently cannot meet practical needs. In this connection, a 15-minute-based modeling approach has been requested by industrial practitioners. Accordingly, a Three-level Genetic Algorithm (3LGA) with Quay Crane (QC) shifting heuristics is designed to fill this research gap. The objective function is to minimize the total service time. Preliminary numerical results show that the proposed 15-minute-based approach can reduce berth and QC idling significantly.
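
The effect of the finer time grid can be seen in a simple discretization sketch: a vessel whose handling takes, say, 1 hour 20 minutes occupies six 15-minute slots (1.5 h) but two hourly slots (2 h), so the idle time charged to it drops from 40 to 10 minutes. The snippet below quantifies only that rounding effect with hypothetical handling times; it is not the paper's 3LGA or QC-shifting heuristic.

```python
import math

def occupied_minutes(handling_min: int, slot_min: int) -> int:
    """Minutes of berth/QC time reserved when time is discretized into slots."""
    return math.ceil(handling_min / slot_min) * slot_min

vessels = [80, 45, 130, 25]   # hypothetical handling times in minutes
for slot in (60, 15):
    idle = sum(occupied_minutes(h, slot) - h for h in vessels)
    print(f"{slot}-minute slots: total idle time = {idle} min")
```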

Estimation of PM2.5 Emissions and Source Apportionment Using Receptor and Dispersion Models

Source apportionment using a dispersion model depends primarily on the quality of the emission inventory. In the present study, a Chemical Mass Balance (CMB) receptor model has been used to identify the sources of PM2.5, while the AERMOD dispersion model has been used to account for sources of PM2.5 missing from the emission inventory. A statistical approach has been developed to quantify the sources not considered in the emission inventory. The inventory of each grid cell was improved by adjusting emissions based on road lengths and on the deficit between measured and modelled concentrations. The CMB analyses showed that fugitive sources, namely soil and road dust, contribute significantly to ambient PM2.5 pollution; since these sources were not adequately represented in the inventory, AERMOD significantly underestimated the ambient air concentration at most locations. The revised emission inventory showed a significant improvement in AERMOD performance, which is evident from the statistical tests.
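
One way to picture the grid-wise adjustment described above is sketched below: the deficit between measured and modelled concentrations at a receptor is converted to an emission increment with a locally derived concentration-to-emission ratio, and that increment is distributed over the grid cells in proportion to their road length. This is a schematic reading of the approach with hypothetical numbers, not the study's actual adjustment equations.

```python
# Hypothetical grid data: modelled emission (t/yr) and road length (km) per cell
cells = {
    "G1": {"emission": 120.0, "road_km": 14.0},
    "G2": {"emission":  95.0, "road_km": 22.0},
    "G3": {"emission":  60.0, "road_km":  9.0},
}

measured_pm25 = 85.0   # ug/m3 at the receptor (hypothetical)
modelled_pm25 = 55.0   # ug/m3 from AERMOD with the original inventory (hypothetical)
emission_per_ug = 8.0  # t/yr of emission per ug/m3 of modelled concentration (hypothetical)

deficit = max(measured_pm25 - modelled_pm25, 0.0)   # model under-prediction
missing_emission = deficit * emission_per_ug        # total emission to be added
total_road = sum(c["road_km"] for c in cells.values())

for name, c in cells.items():
    share = c["road_km"] / total_road               # weight by road length
    c["emission_adj"] = c["emission"] + missing_emission * share
    print(f"{name}: {c['emission']:.0f} -> {c['emission_adj']:.1f} t/yr")
```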

An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions

In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) control algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which in previous works were determined by time-consuming trial-and-error processes, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not required to be diagonal, i.e., the plant may be over- or under-actuated, provided that controllability and observability are preserved. An observer is constructed to estimate the state tracking error directly, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured output is not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with that of some conventional control algorithms.
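
For orientation, one common generic form of such a controller (a textbook-style sketch, not the exact law proposed in the paper) combines a linear sliding surface with a fuzzy approximation of the equivalent control whose consequent parameters are adapted online:

\[
s = C\,\hat e,\qquad
u = \hat u_{eq} - k\,\operatorname{sat}\!\left(\frac{s}{\phi}\right),\qquad
\hat u_{eq} = \hat\theta^{\top}\xi(\hat x),\qquad
\dot{\hat\theta} = \gamma\, s\,\xi(\hat x),
\]

where \(\hat e\) and \(\hat x\) are the observer's estimates of the tracking error and state, \(\xi(\cdot)\) collects the fuzzy basis functions (whose membership parameters may also be updated adaptively), and \(k\), \(\phi\), \(\gamma\) are design gains. Stability of such schemes is typically argued with a Lyapunov function quadratic in \(s\) and in the parameter estimation errors, in line with the Lyapunov-based proof mentioned above.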

Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks

The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in the overall performance of Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) into a virtual wideband channel. LTE-A (Long Term Evolution-Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in more reliable and efficient operation of mobile networks and enabling high-bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of the radio resource allocation, and other parameters into one summative assessment termed the Comparative Factor (CF). In addition, a comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that CA technology can improve network performance, especially in indoor scenarios. Additionally, we show that an increase in carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad-channel conditions, often located at the periphery of the cells, can be improved by intelligent CA deployment. Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user fairness plays a crucial role in improving the performance of LTE-A networks.
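
The Comparative Factor combines average throughput and allocation fairness into a single score; its exact definition is given in the paper, but the sketch below shows one plausible construction of such a summative metric, using Jain's fairness index and a weighted combination of normalized terms. The weights, normalization, and throughput values are assumptions for illustration only.

```python
import numpy as np

def jain_fairness(throughputs: np.ndarray) -> float:
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    return throughputs.sum() ** 2 / (throughputs.size * (throughputs ** 2).sum())

def comparative_factor(throughputs, t_ref=50.0, w_thr=0.5, w_fair=0.5):
    """Hypothetical CF: weighted sum of normalized mean throughput and fairness."""
    thr_norm = min(throughputs.mean() / t_ref, 1.0)
    return w_thr * thr_norm + w_fair * jain_fairness(throughputs)

ue_no_ca = np.array([12.0, 18.0, 5.0, 30.0, 9.0])   # Mbps per UE (hypothetical)
ue_ca    = np.array([25.0, 32.0, 14.0, 41.0, 22.0])
for label, ue in (("no CA", ue_no_ca), ("CA", ue_ca)):
    print(f"{label}: CF = {comparative_factor(ue):.3f}")
```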

Crude Distillation Process Simulation Using Unisim Design Simulator

The paper deals with the simulation of the crude distillation process using the Unisim Design simulator. The necessity of simulating this process is argued both by considerations related to the design of the crude distillation column and by considerations related to the design of advanced control systems. In order to use the Unisim Design simulator for the crude distillation process, the simulators used in Romania were identified and an analysis of the PRO/II, HYSYS, and Aspen HYSYS simulators was carried out. The analysis of the simulators for the crude distillation process allowed the authors to draw conclusions on the success of crude oil modelling. A first aspect developed by the authors is the implementation of specific problems of petroleum liquid-vapour equilibrium using the Unisim Design simulator. The second major element of the article is the development of the methodology and the elaboration of the simulation program for the crude distillation process using Unisim Design resources. The obtained results validate the proposed methodology and will allow dynamic simulation of the process.
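
As an illustration of the liquid-vapour equilibrium calculations that underlie such column simulations (and not of Unisim Design's internal methods), the sketch below solves the Rachford-Rice equation for the vapour fraction of a flash at fixed equilibrium ratios; the pseudo-component compositions and K-values are hypothetical.

```python
from scipy.optimize import brentq

# Hypothetical pseudo-components: overall mole fractions z and K-values at the flash T, P
z = [0.30, 0.25, 0.25, 0.20]
K = [3.2, 1.6, 0.75, 0.20]

def rachford_rice(beta: float) -> float:
    """f(beta) = sum z_i (K_i - 1) / (1 + beta (K_i - 1)); its root is the vapour fraction."""
    return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K))

beta = brentq(rachford_rice, 1e-9, 1.0 - 1e-9)                        # vapour mole fraction
x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]          # liquid composition
y = [Ki * xi for Ki, xi in zip(K, x)]                                 # vapour composition
print(f"vapour fraction = {beta:.3f}")
print("x =", [round(v, 3) for v in x])
print("y =", [round(v, 3) for v in y])
```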

Dietary Habit and Anthropometric Status in Hypertensive Patients Compared to Normotensive Participants in the North of Iran

Hypertension is one of the important causes of morbidity and mortality in many countries, including Iran. It has been shown that hypertension is a consequence of the interaction between genetics and environment. Nutrients have important roles in the control of blood pressure. We assessed dietary habits and anthropometric status in patients with hypertension in the north of Iran, a population with distinctive dietary habits shaped by its culture. This study was conducted on 127 patients with newly diagnosed hypertension and 120 normotensive participants. Anthropometric status was measured; demographic characteristics and medical conditions were collected with validated questionnaires; and dietary habits were assessed with a 3-day food recall (two weekdays and one weekend day). The mean age of the participants was 58 ± 6.7 years. The mean intake of energy, saturated fat, vitamin D, potassium, zinc, dietary fiber, vitamin C, calcium, phosphorus, copper, and magnesium was significantly lower in the hypertensive group compared to the control group (p < 0.05). After adjusting for energy intake, positive associations were observed between hypertension and some dietary nutrients, including cholesterol [OR: 1.1, p = 0.001, B: 0.06], fiber [OR: 1.6, p = 0.001, B: 1.8], vitamin D [OR: 2.6, p = 0.006, B: 0.9], and zinc [OR: 1.4, p = 0.006, B: 0.3] intake. Logistic regression analysis showed no significant association between hypertension and weight or waist circumference. In our study, the mean intake of some nutrients was lower in the hypertensive individuals than in the normotensive individuals. Health training about suitable dietary habits and easier access to vitamin D supplementation in patients with hypertension are cost-effective tools to improve outcomes in Iran.
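
The energy-adjusted associations reported above come from logistic regression, in which the odds ratio for a nutrient is the exponential of its coefficient with the other covariates held fixed. The sketch below reproduces that analysis pattern on randomly generated stand-in data with statsmodels; the variables, data, and resulting estimates are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 247                                # roughly the study's total sample size
df = pd.DataFrame({
    "energy_kcal": rng.normal(2100, 350, n),
    "fiber_g":     rng.normal(18, 5, n),
    "vitd_ug":     rng.normal(4, 1.5, n),
})
# Synthetic binary outcome, only so the example runs end to end.
logit = -2 + 0.001 * df["energy_kcal"] - 0.05 * df["fiber_g"]
df["hypertension"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["fiber_g", "vitd_ug", "energy_kcal"]])   # energy-adjusted model
model = sm.Logit(df["hypertension"], X).fit(disp=False)
odds_ratios = np.exp(model.params)                               # OR = exp(B) per covariate
print(odds_ratios.round(2))
```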

Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart disease, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and to provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machines (SVM), artificial neural networks (ANN), and Cartesian genetic programming-evolved artificial neural networks (CGPANN) are explored in this study without the application of any segmentation algorithm. The signals are first pre-processed to remove unwanted frequencies. Both time- and frequency-domain features are then extracted to train the different models. The different algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. The results indicate that the SVM outperforms the rest, with an accuracy of 73.64%.
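
A minimal version of the unsegmented pipeline described above (band-pass filtering, time- and frequency-domain feature extraction over the whole recording, and an SVM classifier) is sketched below with scipy and scikit-learn. The cut-off frequencies, feature set, and random stand-in data are illustrative assumptions, not the study's exact configuration or dataset.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 2000  # Hz, assumed PCG sampling rate

def preprocess(pcg):
    """Band-pass 25-400 Hz to suppress baseline drift and high-frequency noise."""
    b, a = butter(4, [25 / (FS / 2), 400 / (FS / 2)], btype="band")
    return filtfilt(b, a, pcg)

def features(pcg):
    """Simple time- and frequency-domain features from the unsegmented signal."""
    x = preprocess(pcg)
    f, pxx = welch(x, fs=FS, nperseg=1024)
    centroid = (f * pxx).sum() / pxx.sum()
    return np.array([x.std(), np.abs(np.diff(x)).mean(),
                     np.mean(x ** 2), centroid, f[np.argmax(pxx)]])

# Stand-in dataset: random recordings with random labels, only so the pipeline runs.
rng = np.random.default_rng(1)
recordings = rng.standard_normal((40, 10 * FS))
labels = rng.integers(0, 2, 40)

X = np.vstack([features(r) for r in recordings])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```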

Influence of a Pulsatile Electroosmotic Flow on the Dispersivity of a Non-Reactive Solute through a Microcapillary

The influence of a pulsatile electroosmotic flow (PEOF) on the rate of spread, or dispersivity, of a non-reactive solute released in a microcapillary with slippage at the boundary wall (modeled by the Navier slip condition) is theoretically analyzed. Based on the flow velocity field developed under such conditions, the present study implements an analytical scaling scheme known as the theory of homogenization in order to obtain a mathematical expression for the dispersivity, valid at a large time scale where the initial transients have vanished and the solute spreads under the influence of Taylor dispersion. Our results show that the dispersivity is a function of the slip coefficient, the amplitude of the imposed electric field, the Debye length, and the angular Reynolds number, highlighting the importance of the latter as a factor that can either enhance or reduce the dispersivity, which makes the PEOF a strong candidate for chemical species separation in lab-on-a-chip devices.

Oscillatory Electroosmotic Flow of Power-Law Fluids in a Microchannel

The oscillatory electroosmotic flow (OEOF) of power-law fluids through a microchannel is studied numerically. A time-dependent external electric field (AC) is suddenly imposed at the ends of the microchannel, inducing the fluid motion. The continuity and momentum equations in the x and y directions for the flow field were simplified in the limit of the lubrication approximation theory (LAT) and then solved using a numerical scheme. The solution of the electric potential is based on the Debye-Hückel approximation, which assumes that the surface potential is small, say smaller than 0.025 V, and that the electrolyte is symmetric (z:z). Our results suggest that the velocity profiles across the channel width are controlled by the following dimensionless parameters: the angular Reynolds number, Reω; the electrokinetic parameter, κ̄, defined as the ratio of the characteristic length scale to the Debye length; the parameter λ, which represents the ratio of the Helmholtz-Smoluchowski velocity to the characteristic length scale; and the flow behavior index, n. The results also reveal that the velocity profiles become more and more non-uniform across the channel width as Reω and κ̄ are increased, so the OEOF can be very useful in microfluidic devices such as micromixers.
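
A stripped-down version of the numerical problem, restricted to the Newtonian limit n = 1 and to one assumed non-dimensionalization, is sketched below: the Debye-Hückel potential across the half channel-width is ψ(Y) = cosh(κ̄Y)/cosh(κ̄), and the velocity is marched in time with an explicit finite-difference scheme driven by an oscillating electric body force. It is meant only to convey the structure of such a computation; the paper's actual scaling, power-law rheology, and numerical scheme differ.

```python
import numpy as np

# Assumed dimensionless model (Newtonian limit, n = 1):
#   Re_w * du/dt = d2u/dY2 + kappa^2 * psi(Y) * sin(t),   u(+-1, t) = 0
kappa, re_w = 20.0, 5.0
ny = 201
Y = np.linspace(-1.0, 1.0, ny)
dy = Y[1] - Y[0]
psi = np.cosh(kappa * Y) / np.cosh(kappa)       # Debye-Hueckel potential profile

dt = 0.4 * re_w * dy**2 / 2.0                   # inside the explicit-scheme stability limit
u = np.zeros(ny)
t, t_end = 0.0, 4.0 * np.pi                     # two forcing periods

while t < t_end:
    lap = np.zeros(ny)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
    u[1:-1] += dt / re_w * (lap[1:-1] + kappa**2 * psi[1:-1] * np.sin(t))
    u[0] = u[-1] = 0.0                          # no-slip walls
    t += dt

print("velocity at wall-adjacent node and centreline:", u[1], u[ny // 2])
```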

Programming Language Extension Using Structured Query Language for Database Access

Relational databases constitute a vital tool for the effective management and administration of both personal and organizational data. Data access ranges from single-user database management software to more complex distributed server systems. This paper appraises the use of a programming language extension, Structured Query Language (SQL), to establish links to a relational database (Microsoft Access 2013) from within the Visual C++ 9 programming environment. The methodology used involves the creation of tables to form a database using Microsoft Access 2013, which is Object Linking and Embedding (OLE) database compliant. SQL commands are used to query the tables in the database for easy extraction of the expected records inside the Visual C++ environment. The findings of this paper reveal that records can easily be accessed and manipulated to filter exactly what the user wants, such as the retrieval of records meeting specified criteria, the updating of records, and the deletion of part or all of the records in a table.
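
To make the kind of SQL-driven access described above concrete, the sketch below performs the same categories of operations (table creation, criteria-based retrieval, update, and deletion) using Python's built-in sqlite3 module as a stand-in host environment. The paper's own implementation uses Visual C++ 9 with an OLE-compliant Microsoft Access 2013 database, and the table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")       # stand-in for the Access database
cur = conn.cursor()

# Create a table and populate it (hypothetical schema)
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
cur.executemany("INSERT INTO students (name, score) VALUES (?, ?)",
                [("Ada", 81.5), ("Ben", 67.0), ("Chidi", 92.3)])

# Retrieval of records meeting a specified criterion
cur.execute("SELECT name, score FROM students WHERE score >= ?", (70,))
print(cur.fetchall())

# Updating records
cur.execute("UPDATE students SET score = score + 5 WHERE name = ?", ("Ben",))

# Deleting part of the records in a table
cur.execute("DELETE FROM students WHERE score < ?", (70,))
conn.commit()
conn.close()
```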

The Role of Home Composting in Waste Management Cost Reduction

Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) introduces source reduction as one of the most important means of dealing with the problems caused by increased landfilling and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce the waste flow to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, i.e., reducing per capita waste generation, and waste diversion, i.e., reducing the transfer of waste to landfills. The present paper investigates home composting as a managerial solution for reducing waste transfer to landfills. Home composting has many benefits. The use of household waste for the production of compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it for the production of compost and plant fertilizer helps to recycle the material in a shorter time and to use it effectively, in order to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable and helps to grow crops and garden plants; it is also used for modifying the soil structure and maintaining its moisture. Food waste that is transferred to landfills spoils and produces leachate after a while, and it also releases methane and other greenhouse gases. Composting these materials at home is instead the best way to manage degradable materials, use them efficiently, and reduce environmental pollution. Studies have shown that the benefits from the sale of the produced compost and the reduced costs of collecting, transporting, and burying waste can well offset the costs of purchasing a home composting machine and of the related training. Moreover, the process of producing home compost may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.

Case Study of the Roma Tomato Distribution Chain: A Dynamic Interface for an Agricultural Enterprise in Mexico

From August to December 2016, a diagnostic and strategic planning study was carried out on the supply chain of the company Agropecuaria GABO S.A. de C.V. The final product of the study was a strategic plan and a project portfolio to meet the demands of the three links in the supply chain of the Roma tomato exported annually to the United States of America. In this project, the strategic objective of ensuring the proper handling of the product was selected, and one of the goals associated with it was the use of quantitative methods to support decision making. Considering these antecedents, the objective of this case study was to develop a model to analyze the behavioral dynamics of the distribution chain, from the logistics of storage and shipment of Roma tomato in 81-case pallets (11.5 kg per case), through the two pre-cooling rooms, to the eventual loading onto transports, seeking to reduce the bottleneck and the associated costs by means of a dynamic interface. The methodology used was that of system dynamics, with four phases adapted to the purpose of the study: 1) the conceptualization phase; 2) the formulation phase; 3) the evaluation phase; and 4) the communication phase. The main practical conclusions point to the possibility of reducing both the bottlenecks in the cooling rooms and the costs by simulating scenarios and modifying certain policies. Furthermore, a dynamic interface between the model and the stakeholders was created by generating interaction with buttons and simple instructions that allow users to make modifications and observe diverse behaviors.
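
The stock-and-flow structure behind such a model can be sketched in a few lines of Euler-integrated Python: pallets flow from packing into two pre-cooling rooms of finite capacity and then onto transports, and the queue that builds up in front of the rooms is the bottleneck of interest. All rates and capacities below are hypothetical placeholders, not the company's data, and the real model also includes the cost policies and interactive interface described above.

```python
# Simple stock-and-flow (system dynamics) sketch, Euler integration, dt in hours
dt, horizon = 0.25, 48.0
arrival_rate = 24.0            # pallets/hour arriving from packing (hypothetical)
cooling_rate_per_room = 10.0   # pallets/hour each room can process (hypothetical)
room_capacity = 60.0           # pallets per pre-cooling room (hypothetical)
loading_rate = 22.0            # pallets/hour loaded onto transports (hypothetical)

queue, rooms, loaded = 0.0, 0.0, 0.0
t = 0.0
while t < horizon:
    inflow = arrival_rate
    # Rooms accept pallets only while below their combined capacity
    if rooms < 2 * room_capacity:
        intake = min(queue / dt + inflow, 2 * cooling_rate_per_room)
    else:
        intake = 0.0
    outflow = min(rooms / dt, loading_rate)

    queue += dt * (inflow - intake)    # bottleneck: pallets waiting for a room
    rooms += dt * (intake - outflow)   # pallets being pre-cooled
    loaded += dt * outflow             # pallets loaded onto transports
    t += dt

print(f"after {horizon:.0f} h: queue = {queue:.1f}, in rooms = {rooms:.1f}, loaded = {loaded:.1f}")
```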