Abstract: Spectrum underutilization has made cognitive
radio a promising technology both for current and future
telecommunications. This is due to their ability to exploit unused
spectrum in bands dedicated to other wireless communication
systems and thus increase overall spectrum occupancy. The essential
function that allows a cognitive radio device to perceive the occupancy
of the spectrum is spectrum sensing. In this paper, the performance
of modern adaptations of the four most widely used spectrum
sensing techniques, namely energy detection (ED), cyclostationary
feature detection (CSFD), matched filter (MF) detection and eigenvalue-based
detection (EBD), is compared. The implementation has been
accomplished through the PlutoSDR hardware platform and the
GNU Radio software package in very low Signal-to-Noise Ratio
(SNR) conditions. The optimal detection performance of the
examined methods in a realistic implementation-oriented model is
found for the common relevant parameters (number of observed
samples, sensing time and required probability of false alarm).
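Of the compared methods, the energy detector lends itself to a compact illustration. The sketch below is a minimal NumPy/SciPy Monte Carlo simulation, not the paper's PlutoSDR/GNU Radio implementation: it assumes unit (known) noise variance, a Gaussian test signal, and a threshold set from the required probability of false alarm via the central-limit approximation; the sample count, SNR and Pfa values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 1000          # observed samples per sensing window (assumed)
pfa = 0.1         # required probability of false alarm (assumed)
snr_db = -10.0    # low-SNR regime (assumed)
trials = 2000

# Noise-only statistic T = mean(|x|^2) has mean 1 and variance 2/N for
# unit-variance Gaussian noise, so the CLT gives the threshold directly
# from the required false-alarm probability.
thr = 1.0 + norm.isf(pfa) * np.sqrt(2.0 / N)

def empirical_rate(signal_present):
    # Monte Carlo estimate of the probability that T exceeds the threshold
    x = rng.standard_normal((trials, N))
    if signal_present:
        x = x + np.sqrt(10 ** (snr_db / 10)) * rng.standard_normal((trials, N))
    T = (x ** 2).mean(axis=1)
    return (T > thr).mean()

pfa_emp = empirical_rate(False)   # should sit close to the target pfa
pd_emp = empirical_rate(True)     # detection probability at -10 dB SNR
```

Increasing N at fixed Pfa raises the detection probability, which is the trade-off between sensing time and reliability studied in the paper.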
Abstract: Road traffic accidents are among the principal causes of
traffic congestion, causing human losses, damage to health and the
environment, economic losses and material damage. Traditional
studies of road traffic accidents in urban zones require a very high
investment of time and money, and their results are often out of date.
Nowadays, however, in many countries crowdsourced GPS-based
traffic and navigation apps have emerged as an important low-cost
source of information for studies of road traffic accidents and of the
urban congestion they cause. In this article we identify the
zones, roads and specific times in Mexico City (CDMX) in which the
largest number of road traffic accidents were concentrated during 2016.
We built a database compiling information obtained from the social
network known as Waze. The methodology employed was Knowledge
Discovery in Databases (KDD) for the discovery of patterns
in the accident reports, applying data mining techniques
with the help of Weka. The selected algorithms were Expectation
Maximization (EM), to obtain the ideal number of clusters for the
data, and k-means as the grouping method. Finally, the results were
visualized with the Geographic Information System QGIS.
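The two clustering steps above (EM to choose the number of clusters, then k-means to group the reports) can be sketched as follows. This is a hypothetical scikit-learn stand-in for the Weka workflow, using synthetic latitude/longitude hot spots instead of the real Waze data; BIC is used here as the model-selection criterion for EM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical geocoded accident reports: three synthetic hot spots
# (lat/lon pairs) standing in for the real Waze-derived database.
pts = np.vstack([rng.normal([19.43, -99.13], 0.01, (100, 2)),
                 rng.normal([19.36, -99.18], 0.01, (100, 2)),
                 rng.normal([19.48, -99.10], 0.01, (100, 2))])

# Step 1: EM (Gaussian mixtures) fitted for several candidate cluster
# counts; the Bayesian Information Criterion selects the best count.
bic = {k: GaussianMixture(n_components=k, n_init=3, random_state=0)
          .fit(pts).bic(pts) for k in range(1, 7)}
best_k = min(bic, key=bic.get)

# Step 2: k-means with the EM-selected count groups the reports into
# the final accident-concentration zones.
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pts)
```

The resulting cluster labels can then be joined back to the coordinates and exported for visualization in QGIS.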
Abstract: Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. It is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols are potent anti-carcinogenic agents against colon and other cancers. Buckwheat has significant nutritive value and plenty of uses. Cultivation of buckwheat in the southern part of India is very meager. Hence, a study was planned with the objective of assessing the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during Kharif 2017. The experiment was laid out in a split-plot design with three replications, with three planting geometries as main plots, two genotypes as sub-plots and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a vertisol. Standard procedures were followed to record the observations. The planting geometry of 30 × 10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) as compared to the 40 × 10 cm and 20 × 10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 as compared to genotype PRB-1 (687 kg ha⁻¹ and 34.2, respectively). However, genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) as compared to genotype IC-79147 (1173 kg ha⁻¹).
Genotype IC-79147 also recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) as compared to PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) as compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Among the treatment combinations, genotype IC-79147 with the 30 × 10 cm planting geometry and 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.
Abstract: Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably via wireless. The MSA web solution was designed under UI/UX principles and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the ‘big data’. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
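One of the core computations a tool of this kind automates is the Gage R&R study. The sketch below is an illustrative ANOVA-based decomposition for a balanced crossed study on synthetic data, not the project's actual Python API; the function name and the simulated variance levels are assumptions.

```python
import numpy as np

def gage_rr(x):
    # ANOVA-based Gage R&R for a balanced crossed study;
    # x has shape (parts, operators, repeats)
    p, o, r = x.shape
    grand = x.mean()
    part_m = x.mean(axis=(1, 2))
    oper_m = x.mean(axis=(0, 2))
    cell_m = x.mean(axis=2)

    ss_p = o * r * ((part_m - grand) ** 2).sum()
    ss_o = p * r * ((oper_m - grand) ** 2).sum()
    ss_po = r * ((cell_m - grand) ** 2).sum() - ss_p - ss_o
    ss_e = ((x - cell_m[:, :, None]) ** 2).sum()

    ms_p = ss_p / (p - 1)
    ms_o = ss_o / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_e = ss_e / (p * o * (r - 1))

    # Expected-mean-square equations solved for the variance components
    v_rep = ms_e                                 # repeatability
    v_po = max((ms_po - ms_e) / r, 0.0)          # operator-by-part interaction
    v_op = max((ms_o - ms_po) / (p * r), 0.0)    # operator (reproducibility)
    v_part = max((ms_p - ms_po) / (o * r), 0.0)  # part-to-part

    grr = np.sqrt(v_rep + v_op + v_po)           # gauge R&R std deviation
    pv = np.sqrt(v_part)                         # part variation
    tv = np.sqrt(grr ** 2 + v_part)              # total variation
    return grr, pv, 100.0 * grr / tv             # %GRR of total variation

# Synthetic balanced study: 10 parts x 3 operators x 3 repeats
rng = np.random.default_rng(7)
data = (rng.normal(0.0, 2.0, 10)[:, None, None]    # part effects
        + rng.normal(0.0, 0.5, 3)[None, :, None]   # operator effects
        + rng.normal(0.0, 0.3, (10, 3, 3)))        # repeatability noise
grr, pv, pct = gage_rr(data)
```

The %GRR figure is the usual acceptance metric: the smaller the measurement system's share of total variation, the more trustworthy the data.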
Abstract: Embodied Cognition (EC) as a learning paradigm is based on the idea of an inseparable link between body, mind, and environment. In recent years, the advent of theoretical learning approaches around EC theory has resulted in a number of empirical studies exploring the implementation of the theory in education. This systematic literature overview identifies the mainstream of EC research and emphasizes the implementation of the theory across learning environments. Based on a corpus of 43 manuscripts published between 2013 and 2017, it sets out to describe the range of topics covered under the umbrella of EC and provides a holistic view of the field. The aim of the present review is to investigate the main issues in EC research related to the various learning contexts. In particular, the study addresses the research methods and technologies that are utilized, and it also explores the integration of the body into the learning context. An important finding from the overview is the potential of the theory in different educational environments and disciplines. However, an explicit pedagogical framework from an educational perspective for successful implementation in various learning contexts is still lacking.
Abstract: We present a new class of numerical techniques to
solve shallow water flows over dry areas including run-up. Many
recent investigations on wave run-up in coastal areas are based on
the well-known shallow water equations. Numerical simulations have
also been performed to understand the effects of several factors on tsunami
wave impact and run-up in the presence of coastal areas. In all these
simulations the shallow water equations are solved in the entire domain,
including dry areas, and special treatments are used for the numerical
solution of singularities in these dry regions. In the present study we
propose a new method to deal with these difficulties by reformulating
the shallow water equations into a new system to be solved only in the
wetted domain. The system is obtained by a change of coordinates
leading to a set of equations in a moving domain for which the
wet/dry interface is then reconstructed using the wave speed. To solve
the new system we present a finite volume method of Lax-Friedrichs
type along with a modified method of characteristics. The method is
well-balanced and accurately resolves dam-break problems over dry
areas.
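For context, a minimal sketch of a Lax-Friedrichs-type finite volume scheme for the 1D shallow water equations on a fully wet dam-break problem is given below (in Python rather than the authors' code). It omits the paper's coordinate transformation, moving-domain reformulation and method of characteristics, and simply illustrates the base scheme; grid size, CFL number and initial states are arbitrary.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(U):
    # Physical flux of the 1D shallow water equations; U rows are [h, hu]
    h, hu = U
    return np.array([hu, hu ** 2 / h + 0.5 * g * h ** 2])

def lax_friedrichs(U, dx, dt, nsteps):
    # Classic Lax-Friedrichs update with zero-gradient (outflow) ghost cells
    for _ in range(nsteps):
        Up = np.pad(U, ((0, 0), (1, 1)), mode="edge")
        F = swe_flux(Up)
        U = 0.5 * (Up[:, :-2] + Up[:, 2:]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    return U

# Fully wet dam break: h = 2 m for x < 0, h = 1 m for x > 0, fluid at rest
N = 400
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
h0 = np.where(x < 0, 2.0, 1.0)
U = lax_friedrichs(np.array([h0, np.zeros(N)]),
                   dx, dt=0.4 * dx / np.sqrt(g * 2.0), nsteps=100)
h, hu = U
```

With the time step limited by the fastest gravity wave, the scheme conserves mass and keeps the depth between the two dam-break states; the singular h → 0 limit at a wet/dry front is exactly what the reformulation in the paper is designed to avoid.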
Abstract: Cardiopulmonary signal monitoring, without the use of
contact electrodes or any type of in-body sensors, has several
applications, such as sleep monitoring and continuous monitoring
of vital signs in bedridden patients. The system also has
applications in the vehicular environment, where monitoring the driver
can help avoid accidents caused by cardiac failure. The bio-radar
system proposed in this paper can measure vital signs accurately
by using the Doppler effect principle, which relates the received
signal properties to the distance change between the radar antennas
and the person’s chest wall. Since the bio-radar is intended to
monitor subjects in real time and over long periods, it is impossible
to guarantee the patient’s immobilization, and hence random body
motion will interfere with the acquired signals. In this paper,
a mathematical model of the bio-radar is presented, as well as its
simulation in MATLAB. The algorithm used for breathing rate extraction
is explained, and a method for DC offset removal based on a motion
detection system is proposed. Furthermore, experimental tests were
conducted to show that the unavoidable random motion
can be used to estimate the DC offsets accurately and thus remove
them successfully.
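A minimal Python sketch of the signal chain described above: the chest-wall motion phase-modulates the I/Q baseband, the DC offsets are estimated as the centre of the I/Q circle via a least-squares (Kasa) fit, and arctangent demodulation recovers the breathing rate. Here a deterministic sinusoidal chest motion sweeps the arc instead of the random body motion exploited in the paper, and all numeric values (wavelength, amplitude, offsets) are assumptions.

```python
import numpy as np

fs, T = 100.0, 30.0                        # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
lam = 0.125                                # wavelength (m), assumed 2.4 GHz radar
x = 0.004 * np.sin(2 * np.pi * 0.25 * t)   # 4 mm chest motion at 0.25 Hz (15 bpm)
dc = 0.31 + 0.18j                          # "unknown" DC offsets to recover

# Doppler principle: chest displacement phase-modulates the received
# quadrature baseband, whose samples trace an arc of a circle shifted by dc.
s = dc + np.exp(1j * 4 * np.pi * x / lam)

# Least-squares (Kasa) circle fit: the fitted centre is the DC offset pair.
I, Q = s.real, s.imag
A = np.column_stack([2 * I, 2 * Q, np.ones_like(I)])
b = I ** 2 + Q ** 2
(ic, qc, _), *_ = np.linalg.lstsq(A, b, rcond=None)

# Remove the offsets, arctangent-demodulate, and read the breathing rate
# from the dominant spectral peak of the recovered phase.
phi = np.unwrap(np.angle(s - (ic + 1j * qc)))
spec = np.abs(np.fft.rfft(phi - phi.mean()))
freqs = np.fft.rfftfreq(len(phi), 1 / fs)
breath_hz = freqs[spec.argmax()]
```

The same idea carries over to the paper's setting: any motion that spreads the samples along the circle, including random body motion, improves the conditioning of the centre estimate.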
Abstract: Over the last decade, due to climate change and a strategy of natural resources preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and integrating waste streams (including CO2 and heat emissions), the microalgae bioeconomy can supply food and feed, as well as the pharmaceutical and power industries, with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated in a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a suitable cellular composition for the subsequent extraction of the value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. The metabolic nutrient needs were met by adding micro- and macro-nutrients to the medium to ensure autotrophic growth conditions for the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction was conducted in a repeated-batch semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol were used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variations between 8.1% and 13.9%.
The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with protein content in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, biomass residues are used as substrate for anaerobic digestion at laboratory scale. The methane concentration, which was measured on a daily basis, showed some variation between samples after the extraction steps but was in the range of 48% to 55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be reused within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
Abstract: Acidification is a technique used in oil reservoirs
to improve annual production, reduce the skin factor and increase the
pressure of an oil well while eliminating the formation damage that
occurs during the drilling process, completion and, amongst others,
to create new channels allowing the easy circulation of oil around
a producing well. This is achieved by injecting an acidizing fluid
at a relatively low pressure to prevent fracturing the formation. The
treatment fluid used depends on the type and nature of the reservoir
rock traversed as well as its petrophysical properties. In order to
understand the interaction mechanisms between the treatment fluids
used for the reservoir rock acidizing, several candidate wells for
stimulation were selected in the large Hassi Messaoud deposit in
southern Algeria. The stimulation of these wells is completed using
different fluids composed mainly of HCl acid with other additives
such as corrosion inhibitors, clay stabilizers and iron controllers.
These treatment fluids are injected in two phases, namely tube
cleaning (7.5% HCl) and matrix acidizing with 15% HCl. The
stimulation results obtained are variable according to the type of
rock traversed and its mineralogical composition. These results show
that there has been an increase in production flow and head pressure
respectively from 1.99 m³/h to 3.56 m³/h and from 13 kgf/cm²
to 20 kgf/cm² in the sand formations having good petrophysical
properties (porosity = 16%) and a low amount of clay (Vsh = 6%).
Abstract: The paper deals with the simulation of the crude distillation process using the Unisim Design simulator. The necessity of simulating this process is argued both by considerations related to the design of the crude distillation column and by considerations related to the design of advanced control systems. In order to use the Unisim Design simulator for the crude distillation process, the simulators used in Romania were identified and an analysis of the PRO/II, HYSYS, and Aspen HYSYS simulators was carried out. This analysis of the simulators for the crude distillation process has allowed the authors to draw conclusions on the success of crude oil modelling. A first aspect developed by the authors is the implementation of specific petroleum liquid-vapor equilibrium problems in the Unisim Design simulator. The second major element of the article is the development of the methodology and the elaboration of the simulation program for the crude distillation process using Unisim Design resources. The obtained results validate the proposed methodology and will allow dynamic simulation of the process.
Abstract: Recent ground motion records demonstrate that near-field earthquakes have different properties from far-field earthquakes. In general, most of these properties are affected by an important phenomenon called ‘forward directivity’ in near-fault earthquakes. Measuring structural damage is one of the common activities carried out after an earthquake. Predicting the amount of damage caused by the earthquake, as well as determining the vulnerability of the structure, is extremely significant. In order to measure the amount of structural damage, instead of calculating the acceleration and velocity spectra, it is possible to use the damage spectra of the structure. The damage spectrum is a nonlinear spectrum obtained by setting the nonlinear parameters of single-degree-of-freedom structures, performing dynamic analysis under a specific record and measuring the damage of each structure. In this study, the damage spectra of steel structures have been drawn. For this purpose, different kinds of concentrically and eccentrically braced structures with various ductility coefficients on hard and soft soil under near-field and far-field ground motion records have been considered using the Krawinkler and Zohrei damage index. The results indicate that, as the structures' fundamental period increases, the amount of damage increases under near-field earthquakes compared to far-field earthquakes. In addition, as the structural ductility increases, the amount of damage under both near-field and far-field earthquakes decreases noticeably. Furthermore, in concentrically braced structures, the amount of damage under near-field earthquakes is almost two times that in eccentrically braced structures, especially for fundamental periods larger than 0.6 s.
Abstract: Missing values in real-world datasets are a common
problem. Many algorithms have been developed to deal with this
problem; most of them replace the missing values with a fixed
value computed from the observed values. In our work, we used
a distance function based on the Bhattacharyya distance, which
measures the similarity of two probability distributions, to measure
the distance between objects with missing values. The proposed
distance distinguishes between known and unknown values: the
distance between two known values is the Mahalanobis distance,
while, when one of them is missing, the distance is computed based
on the distribution of the known values for the coordinate that
contains the missing value. This method was integrated with Wikaya, a
digital health company developing a platform that helps to improve
the prevention of chronic diseases such as diabetes and cancer. In order
for Wikaya’s recommendation system to work, distances between users
need to be measured. Since there are missing values in the collected
data, there is a need for a distance function that measures distances
between incomplete user profiles. To evaluate the accuracy of the
proposed distance function in reflecting the actual similarity between
objects when some of them contain missing values, we integrated it
within the framework of the k-nearest-neighbors (kNN) classifier, since
its computation is based only on the similarity between objects. To
validate this, we ran the algorithm over the diabetes and breast cancer
datasets, standard benchmark datasets from the UCI repository. Our
experiments show that the kNN classifier using our proposed distance
function outperforms kNN using other existing methods.
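A simplified sketch of such a distance inside a kNN classifier: under an independent-feature (diagonal-covariance) assumption, the Mahalanobis term reduces to a scaled squared difference per coordinate, and a coordinate with a missing value contributes the expected squared distance under that feature's observed distribution. This is a simplification of the paper's Bhattacharyya-based construction, and the toy data are hypothetical, not the UCI datasets.

```python
import numpy as np

def fit_stats(X):
    # Per-feature mean and variance estimated from the observed entries
    return np.nanmean(X, axis=0), np.nanvar(X, axis=0) + 1e-12

def distance(a, b, mu, var):
    # Known-known: scaled squared difference (a diagonal Mahalanobis term).
    # Known-missing: expected squared distance of the known value to a
    # random draw from that feature's observed distribution.
    # Missing-missing: expected squared distance of two independent draws.
    d = 0.0
    for j in range(len(a)):
        ka, kb = not np.isnan(a[j]), not np.isnan(b[j])
        if ka and kb:
            d += (a[j] - b[j]) ** 2 / var[j]
        elif ka or kb:
            v = a[j] if ka else b[j]
            d += ((v - mu[j]) ** 2 + var[j]) / var[j]   # E[(v - Y)^2] / var
        else:
            d += 2.0                                     # E[(X - Y)^2] / var
    return np.sqrt(d)

def knn_predict(Xtr, ytr, q, k, mu, var):
    # Majority vote among the k training objects closest to the query q
    order = np.argsort([distance(q, r, mu, var) for r in Xtr])[:k]
    vals, counts = np.unique(ytr[order], return_counts=True)
    return vals[counts.argmax()]

# Toy incomplete data: two classes, one query with a missing coordinate
Xtr = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]])
ytr = np.array([0, 0, 1, 1])
mu, var = fit_stats(Xtr)
pred = knn_predict(Xtr, ytr, np.array([4.9, np.nan]), 3, mu, var)
```

Because the missing coordinate contributes a comparable expected term for every neighbor, the decision is driven by the coordinates that are actually observed, which is the behavior the proposed distance is designed to achieve.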
Abstract: Cardiologists perform cardiac auscultation to detect
abnormalities in heart sounds. Since accurate auscultation is
a crucial first step in screening patients with heart diseases,
there is a need to develop computer-aided detection/diagnosis
(CAD) systems to assist cardiologists in interpreting heart sounds
and provide second opinions. In this paper different algorithms
are implemented for automated heart sound classification using
unsegmented phonocardiogram (PCG) signals. Support vector
machine (SVM), artificial neural network (ANN) and Cartesian
genetic programming evolved artificial neural network (CGPANN)
classifiers, applied without any segmentation algorithm, have been
explored in this study. The signals are first pre-processed to remove
any unwanted frequencies. Both time and frequency domain features
are then extracted for training the different models. The different
algorithms are tested in multiple scenarios and their strengths and
weaknesses are discussed. Results indicate that SVM outperforms
the rest with an accuracy of 73.64%.
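The SVM branch of such a pipeline can be sketched end to end: extract a few time- and frequency-domain features from unsegmented signals and train a classifier. The code below uses synthetic "lub-dub plus murmur" stand-ins for real PCG recordings, and the feature set and scikit-learn pipeline are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs = 1000  # sampling rate (Hz)

def features(sig):
    # A few time- and frequency-domain features of an unsegmented signal
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    centroid = (freqs * spec).sum() / spec.sum()   # spectral centroid
    zcr = np.mean(np.diff(np.sign(sig)) != 0)      # zero-crossing rate
    return [sig.std(), np.abs(sig).mean(), zcr, centroid]

def synth_pcg(abnormal, n=2000):
    # Hypothetical PCG stand-in: gated low-frequency heart-sound bursts,
    # plus a higher-frequency component mimicking a murmur when abnormal
    t = np.arange(n) / fs
    s = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.7)
    if abnormal:
        s = s + 0.5 * np.sin(2 * np.pi * 180 * t)
    return s + 0.05 * rng.standard_normal(n)

y = np.arange(200) % 2
X = np.array([features(synth_pcg(bool(label))) for label in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

On real, noisy recordings the classes overlap far more than in this toy setting, which is why the accuracies reported in the paper sit well below a clean synthetic benchmark.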
Abstract: Inelastic deformation of the brace in a Special Concentrically Braced Frame (SCBF) causes inelastic damage to gusset plate connections, such as buckling at the edges. In this study, an analytical study was undertaken to improve the seismic performance of SCBF connections. Using edge stiffeners is the main solution examined in this study to improve the behavior of gusset plate connections. For this purpose, in order to examine the effect of edge stiffeners on gusset plate connections, two groups of models, with and without edge stiffeners and with different types of braces, were built using ABAQUS software. The results show that including the edge stiffener reduces the equivalent plastic strain values in the connection region of the gusset plate with the beam and column, which can improve the seismic performance of the gusset plate. Furthermore, including the edge stiffeners significantly decreases the strain concentration in the regions where gusset plates are connected to the beam and column. Moreover, providing the 2tpl clearance distance causes a reduction in the plastic strain.
Abstract: The influence of a pulsatile electroosmotic flow (PEOF)
on the rate of spread, or dispersivity, of a non-reactive solute released
in a microcapillary with slippage at the boundary wall (modeled by
the Navier-slip condition) is theoretically analyzed. Based on the flow
velocity field developed under such conditions, the present study
implements an analytical scheme of scaling known as the Theory
of Homogenization, in order to obtain a mathematical expression for
the dispersivity, valid at a large time scale where the initial transients
have vanished and the solute spreads under the Taylor dispersion
influence. Our results show that the dispersivity is a function of the
slip coefficient, the amplitude of the imposed electric field, the Debye
length and the angular Reynolds number, highlighting the importance
of the latter as an enhancing or detrimental factor for the dispersivity,
which makes the PEOF a strong candidate for chemical
species separation in lab-on-a-chip devices.
Abstract: Oscillatory electroosmotic flow (OEOF) in power-law
fluids through a microchannel is studied numerically. A
time-dependent external electric field (AC) is suddenly imposed
at the ends of the microchannel which induces the fluid motion.
The continuity and momentum equations in the x and y direction
for the flow field were simplified in the limit of the lubrication
approximation theory (LAT), and then solved using a numerical
scheme. The solution of the electric potential is based on the
Debye–Hückel approximation, which assumes that the surface potential
is small, say smaller than 0.025 V, and a symmetric (z : z)
electrolyte. Our results suggest that the velocity profiles across
the channel-width are controlled by the following dimensionless
parameters: the angular Reynolds number, Reω, the electrokinetic
parameter, κ̄, defined as the ratio of the characteristic length scale
to the Debye length, the parameter λ which represents the ratio
of the Helmholtz-Smoluchowski velocity to the characteristic length
scale and the flow behavior index, n. Also, the results reveal that
the velocity profiles become more and more non-uniform across the
channel-width as Reω and κ̄ are increased, so the OEOF
can be really useful in microfluidic devices such as micromixers.
Abstract: The Ignatian Discernment Process (IDP) is an intensive decision-making tool for deciding on life issues. Decisions are influenced by various factors outside of the decision maker as well as by inclinations within. This paper develops the IDP in the context of the Fuzzy Multi-Criteria Decision Making (FMCDM) process. The Extended VIKOR method is a decision-making method which encompasses even conflict situations and accommodates weights for the various issues. Various aspects of the IDP, namely the three ways of decision making and the tactics of inner desires, are observed, analyzed and articulated within the framework of fuzzy rules. Decision-making situations are broadly categorized into two types: issues outside of the decision maker that influence the person, and inner feelings, which also play a vital role in coming to a conclusion. The IDP integrates both categories using the Extended VIKOR method. Case studies are carried out and analyzed with the FMCDM process. Finally, the IDP is verified with an illustrative case study and the results are interpreted. A confused person who could not come to a conclusion is able to make a decision on a concrete way of life through the IDP. The proposed IDP model recommends an integrated and committed approach to value-based decision making.
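For reference, the classic crisp VIKOR ranking that underlies the Extended VIKOR method can be sketched as below; the fuzzy extension and the IDP-specific criteria are beyond this illustration, and the alternatives, scores and weights are hypothetical.

```python
import numpy as np

def vikor(F, w, v=0.5):
    # F[i, j]: score of alternative i on (benefit) criterion j; w: weights.
    # Assumes the alternatives are not all tied on S and R.
    f_best = F.max(axis=0)
    f_worst = F.min(axis=0)
    D = w * (f_best - F) / (f_best - f_worst)   # weighted normalized regrets
    S = D.sum(axis=1)                           # group utility ("majority")
    R = D.max(axis=1)                           # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q                              # lower Q = better compromise

# Hypothetical example: 3 ways of life scored on 4 discernment criteria
F = np.array([[7.0, 8.0, 6.0, 9.0],
              [9.0, 6.0, 8.0, 7.0],
              [5.0, 5.0, 4.0, 6.0]])
w = np.array([0.3, 0.2, 0.3, 0.2])
S, R, Q = vikor(F, w)
best = int(Q.argmin())   # compromise alternative
```

The weight v balances group utility against individual regret, which is how VIKOR accommodates the conflicting criteria the abstract mentions.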
Abstract: Shape memory alloys (SMAs) are often implemented in smart structures as the active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded within composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce the radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells. The cylindrical shells are under a uniform external pressure load. Based on the first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The one-dimensional simplified Brinson model is used for determining the SMA recovery force due to its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain non-linear load-deflection curves. The results are verified by comparing them with those in the literature. Several parametric studies are conducted in order to investigate the effect of SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable usage of SMA wires results in a considerable enhancement in the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
Abstract: The present research was performed to investigate the effect of base course application on the load-settlement characteristics of a sandy subgrade using the plate load test. The main parameter investigated in this study was the subgrade reaction coefficient. The model tests were conducted in a 1.35 m long, 1 m wide, and 1 m deep steel test box at Imam Khomeini International University (IKIU Calibration Chamber). The base courses used in this research had three different thicknesses of 15 cm, 20 cm, and 30 cm. The test results indicated that, when a base course is used over a loose sandy subgrade, the value of the subgrade reaction coefficient increases from 7 to 132, 224, and 396 in the presence of the 15 cm, 20 cm, and 30 cm base courses, respectively.
Abstract: Pervious concrete combines considerable permeability with adequate strength, which makes it very beneficial in pavement construction and also in ground improvement projects. In this paper, a single pervious concrete pile subjected to vertical and lateral loading is analysed using a verified three-dimensional finite element code. A parametric study was carried out in order to investigate the load bearing capacity of a single unreinforced pervious concrete pile in saturated soft soil and also to gain insight into the failure mechanism of this rather new soil improvement technique. The results show that the concrete damaged plasticity constitutive model can closely simulate the highly brittle nature of the pervious concrete material, and, considering the computed vertical and horizontal load bearing capacities, some suggestions have been made for ground improvement projects.