Abstract: This study investigates the benefits of semi-active devices relative to passive viscous damping in seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is compared with passive linear viscous damping and with an optimal non-causal semi-active control strategy; the latter requires an optimization procedure in which the Euler-Lagrange equations are solved numerically. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example, and two cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity, and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. The numerical results show that, under external excitation, the bridge deck with semi-active control exhibits better structural performance than the passive case.
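For the linear passive benchmark, compiling a transmissibility ratio from the simulated time history can be cross-checked against the closed-form solution. The following is a minimal sketch (pure Python, hypothetical parameter values, not the paper's actual model) of the relative displacement transmissibility Td(ω) of a linear SDOF isolator under sinusoidal base excitation:

```python
import math

def relative_disp_transmissibility(r, zeta, cycles=60, steps_per_cycle=400):
    """Estimate Td(w) for a linear SDOF isolator under sinusoidal base
    excitation of unit displacement amplitude, by integrating the relative
    coordinate with RK4 and measuring the steady-state amplitude.
    r = w/wn (frequency ratio), zeta = viscous damping ratio, wn = 1."""
    wn, w = 1.0, r
    def deriv(t, x, v):
        # relative coordinate: x'' + 2*zeta*wn*x' + wn^2*x = w^2*sin(w*t)
        return v, w * w * math.sin(w * t) - 2 * zeta * wn * v - wn * wn * x
    dt = (2 * math.pi / w) / steps_per_cycle
    t, x, v = 0.0, 0.0, 0.0
    amp = 0.0
    n = cycles * steps_per_cycle
    for i in range(n):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if i >= n - steps_per_cycle:   # amplitude over the last full cycle only
            amp = max(amp, abs(x))
    return amp                         # base displacement amplitude is 1

def closed_form_td(r, zeta):
    """Analytical relative displacement transmissibility of the linear SDOF."""
    return r * r / math.sqrt((1 - r * r) ** 2 + (2 * zeta * r) ** 2)
```

The two should agree to a few percent for the linear case; for a semi-active device the same time-history procedure applies, with the controllable damper force law substituted into `deriv`, which is precisely why the frequency response must be compiled from time histories rather than from transforms.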
Abstract: Teaching Object-Oriented Programming (OOP) as part of a Computing-related university degree is a very difficult task; the road to ensuring that students are actually learning object-oriented concepts is unclear, as students often find it difficult to understand the concept of objects and their behavior. This problem is especially obvious in advanced programming modules where Design Patterns and advanced programming features such as multi-threading and animated GUIs are introduced. Looking at students' performance in their final year of a university course, it is obvious that the level of students' understanding of OOP varies widely from one student to another. Students who aim at games production do very well in the advanced programming module. However, the students' assessment results of the last few years were relatively low; for example, in 2016-2017, the first quartile of marks was as low as 24.5 and the third quartile was 63.5. It is obvious that many students were not confident or competent enough in their programming skills. In this paper, the reasons behind poor performance in advanced OOP modules are investigated, and a suggested practice for teaching OOP based on a complex case study is described and evaluated.
Abstract: The use of biometric identifiers in information security, access control to resources, and authentication in ATMs and banking, among other fields, raises serious concerns about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow obtaining the minutiae template in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed; however, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level of security is the data transformation level, which generates data invariant to rotation and translation; this transformation is irreversible. The second level of security is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of previously proposed models, basing its security on the infeasibility of polynomial reconstruction.
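The second (evaluation) level can be illustrated with a toy sketch in which invariant minutiae features are evaluated under a key-derived polynomial. Everything here is a hypothetical stand-in (the key derivation, the polynomial degree, the modulus, and the integer features are all assumptions, not the paper's actual scheme); it only illustrates why recovering features from evaluations without the key reduces to a polynomial reconstruction problem:

```python
import hashlib

MODULUS = 2 ** 31 - 1  # hypothetical prime modulus for the evaluations

def derive_key_coeffs(secret, degree=4):
    """Derive polynomial coefficients from a secret key string.
    Hypothetical stand-in for the model's key-generation step."""
    coeffs = []
    for i in range(degree + 1):
        h = hashlib.sha256(f"{secret}:{i}".encode()).digest()
        coeffs.append(int.from_bytes(h[:4], "big") % MODULUS)
    return coeffs

def evaluate_feature(feature, coeffs):
    """Evaluate one invariant minutia feature under the key polynomial
    (Horner's rule). Without the coefficients, inverting many such
    evaluations amounts to polynomial reconstruction."""
    acc = 0
    for c in coeffs:
        acc = (acc * feature + c) % MODULUS
    return acc

def protect_template(features, secret):
    coeffs = derive_key_coeffs(secret)
    return [evaluate_feature(f, coeffs) for f in features]
```

Note the two properties a protected template needs: the same features and key always produce the same protected values (so matching is possible), while a different key produces an unrelated template (so a compromised template can be revoked and reissued).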
Abstract: A hydrogel of cellulose acetate cross-linked with ethylenediaminetetraacetic dianhydride (HAC-EDTA) was synthesized by our research group and subjected to characterization and biological tests. Cytocompatibility analysis was performed by confocal microscopy using human adipose-derived stem cells (ASCs). The FTIR analysis showed characteristic bands of cellulose acetate and hydroxyl groups, and the tensile tests showed that HAC-EDTA presents a Young's modulus of 643.7 MPa. The confocal analysis revealed that there was cell growth at the surface of HAC-EDTA. After one day of culture the cells presented spherical morphology, which may be caused by stress from the sequestration of Ca2+ and Mg2+ ions from the cell medium by HAC-EDTA, as demonstrated by ICP-MS. However, after seven and 14 days of culture, the cells presented fibroblastoid morphology, the phenotype expected for this cell type. The results indicate this new material as a potential biomaterial for tissue engineering, ahead of future in vivo approaches.
Abstract: Spectrum underutilization has made cognitive radio a promising technology for both current and future telecommunications. This is due to its ability to exploit unused spectrum in bands dedicated to other wireless communication systems and thus increase their occupancy. The essential function that allows a cognitive radio device to perceive the occupancy of the spectrum is spectrum sensing. In this paper, the performance of modern adaptations of the four most widely used spectrum sensing techniques, namely energy detection (ED), cyclostationary feature detection (CSFD), matched filtering (MF), and eigenvalue-based detection (EBD), is compared. The implementation has been accomplished through the PlutoSDR hardware platform and the GNU Radio software package under very low Signal-to-Noise Ratio (SNR) conditions. The optimal detection performance of the examined methods in a realistic, implementation-oriented model is found for the common relevant parameters (number of observed samples, sensing time, and required probability of false alarm).
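Of the four techniques compared, energy detection is the simplest and makes the role of the probability of false alarm explicit. A minimal sketch (pure Python, hypothetical signal parameters, known noise variance, CLT-based threshold; not the paper's PlutoSDR implementation):

```python
import math
import random
import statistics

def energy_detector(samples, noise_var, pfa):
    """Energy detection: compare the average sample energy against a
    threshold chosen so that P(T > thr | noise only) ~= pfa, using the
    central-limit (Gaussian) approximation of the test statistic."""
    n = len(samples)
    t = sum(s * s for s in samples) / n
    q_inv = statistics.NormalDist().inv_cdf(1 - pfa)   # Q^{-1}(pfa)
    thr = noise_var * (1 + q_inv * math.sqrt(2.0 / n))
    return t > thr, t, thr

random.seed(1)
N, noise_var = 4096, 1.0
noise = [random.gauss(0, math.sqrt(noise_var)) for _ in range(N)]
# a weak sinusoid buried in noise: SNR = 10*log10((A**2/2)/noise_var) = -9 dB
A = 0.5
signal = [A * math.sin(0.1 * k) + noise[k] for k in range(N)]

occupied, t_sig, thr = energy_detector(signal, noise_var, pfa=0.05)
_, t_noise, _ = energy_detector(noise, noise_var, pfa=0.05)
```

With 4096 samples the detector reliably flags the -9 dB tone; shrinking the observation window or the target false-alarm probability degrades detection, which is exactly the trade-off (samples, sensing time, Pfa) examined in the paper.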
Abstract: Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses, and material damage. Traditional studies of road traffic accidents in urban zones require a very high investment of time and money, and their results are often not current. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studying road traffic accidents and the urban congestion they cause. In this article we identify the zones, roads, and specific times in CDMX (Mexico City) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, applying data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
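The clustering step can be illustrated with a plain k-means implementation over hypothetical accident coordinates (the study itself used Weka's EM to choose the number of clusters and Weka's k-means; the coordinates and cluster count below are made up for illustration):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means over (lat, lon)-like pairs: assign each report to its
    nearest centroid, recompute centroids, repeat until assignments settle."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        new = [tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# two hypothetical accident hot spots around Mexico City coordinates
rng = random.Random(42)
spot_a = [(19.43 + rng.gauss(0, 0.01), -99.13 + rng.gauss(0, 0.01)) for _ in range(50)]
spot_b = [(19.35 + rng.gauss(0, 0.01), -99.18 + rng.gauss(0, 0.01)) for _ in range(50)]
centroids, clusters = kmeans(spot_a + spot_b, k=2)
```

The recovered centroids approximate the two synthetic hot spots; in the study, the analogous centroids over the Waze reports mark the zones of accident concentration that were then mapped in QGIS.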
Abstract: Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. It is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, and Na, and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine, and many amino acids which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin, and some other polyphenols are potent anticarcinogenic agents against colon and other cancers. Buckwheat thus has significant nutritive value and plenty of uses, yet its cultivation in the southern part of India is very meager. Hence, a study was planned with the objective of assessing the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during Kharif 2017. The experiment was laid out in a split-plot design with three replications, having three planting geometries as main plots, two genotypes as sub-plots, and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a vertisol, and standard procedures were followed to record the observations. The planting geometry of 30 × 10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9), and 1000-seed weight (26.1 g) as compared to the 40 × 10 cm and 20 × 10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 as compared to genotype PRB-1 (687 kg ha⁻¹ and 34.2, respectively). However, genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) than genotype IC-79147 (1173 kg ha⁻¹). Genotype IC-79147 also recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9), and 1000-seed weight (24.5 g) as compared to PRB-1 (5.4, 5.8, and 22.3 g, respectively). Among the fertility levels tried, 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) as compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Within the treatment combinations, genotype IC-79147 with 30 × 10 cm planting geometry and 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9), and 1000-seed weight (27.3 g) compared to the other treatment combinations.
Abstract: Computer-based optimization techniques can be employed to improve the efficiency of energy conversion processes, including reducing the aerodynamic loss in a thermal power plant turbomachine. In this paper, towards mitigating secondary flow losses, a design optimization workflow is implemented for the casing geometry of a 1.5-stage axial flow turbine that improves the turbine isentropic efficiency. The improved turbine is used in an open thermodynamic gas cycle with regeneration and cogeneration. Performance estimates are obtained with the commercial software Cycle-Tempo. Both design and off-design conditions are considered, as well as variations in inlet air temperature. Reductions in both the natural gas specific fuel consumption and in CO2 emissions are predicted when the gas turbine cycle is fitted with the new casing design. These gains are attractive towards enhancing the competitiveness and reducing the environmental impact of thermal power plants.
Abstract: Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability and reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified by the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web- and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web- and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations from the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API was developed in Python to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web- and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
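One of the MSA-4 procedures such a tool must implement is the Gage R&R study. A compact sketch of the average-and-range method in Python follows (hypothetical measurement data; the K1/K2/K3 constants quoted are the standard AIAG tabulated values for 3 trials, 3 operators, and 10 parts, stated here from memory and worth verifying against the MSA-4 manual):

```python
import math

# AIAG MSA average-and-range constants (3 trials, 3 operators, 10 parts)
K1, K2, K3 = 0.5908, 0.5231, 0.3146

def gage_rr(data):
    """data[operator][part][trial] -> %GRR by the average-and-range method.
    Shape assumed: 3 operators x 10 parts x 3 trials (hypothetical study)."""
    n_ops, n_parts, n_trials = len(data), len(data[0]), len(data[0][0])
    # repeatability (EV): average within-cell range across operators/parts
    rbar = sum(max(cell) - min(cell)
               for op in data for cell in op) / (n_ops * n_parts)
    ev = rbar * K1
    # reproducibility (AV): range of operator averages, corrected for EV
    op_means = [sum(sum(cell) for cell in op) / (n_parts * n_trials)
                for op in data]
    xdiff = max(op_means) - min(op_means)
    av2 = (xdiff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = math.sqrt(max(av2, 0.0))
    grr = math.sqrt(ev ** 2 + av ** 2)
    # part variation (PV) from the range of part averages
    part_means = [sum(data[o][p][t] for o in range(n_ops)
                      for t in range(n_trials)) / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * K3
    tv = math.sqrt(grr ** 2 + pv ** 2)
    return 100.0 * grr / tv
```

Under the usual AIAG acceptance guideline, %GRR below 10% indicates an acceptable measurement system, 10-30% is conditionally acceptable, and above 30% is unacceptable.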
Abstract: We present a new class of numerical techniques to solve shallow water flows over dry areas, including run-up. Many recent investigations of wave run-up in coastal areas are based on the well-known shallow water equations, and numerical simulations have also been performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of coastal areas. In all these simulations the shallow water equations are solved in the entire domain, including dry areas, and special treatments are used for the numerical solution of singularities at these dry regions. In the present study we propose a new method to deal with these difficulties by reformulating the shallow water equations into a new system to be solved only in the wetted domain. The system is obtained by a change of coordinates leading to a set of equations in a moving domain, for which the wet/dry interface is then reconstructed using the wave speed. To solve the new system we present a finite volume method of Lax-Friedrichs type along with a modified method of characteristics. The method is well-balanced and accurately resolves dam-break problems over dry areas.
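In conserved variables q = (h, hu), the Lax-Friedrichs building block referred to above can be sketched for the 1D shallow water equations on a fully wet, periodic grid (flat bottom, hypothetical grid parameters; the paper's moving-domain transformation and modified method of characteristics are not reproduced here):

```python
G = 9.81  # gravitational acceleration, m/s^2

def flux(h, hu):
    """Physical flux of the 1D shallow water equations:
    F(q) = (hu, hu^2/h + g h^2/2)."""
    u = hu / h
    return hu, hu * u + 0.5 * G * h * h

def lax_friedrichs_step(h, hu, dx, dt):
    """One first-order Lax-Friedrichs update on a periodic 1D grid:
    q_i^{n+1} = (q_{i-1} + q_{i+1})/2 - dt/(2 dx) * (F_{i+1} - F_{i-1})."""
    n = len(h)
    new_h, new_hu = h[:], hu[:]
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        fl = flux(h[l], hu[l])
        fr = flux(h[r], hu[r])
        new_h[i] = 0.5 * (h[l] + h[r]) - dt / (2 * dx) * (fr[0] - fl[0])
        new_hu[i] = 0.5 * (hu[l] + hu[r]) - dt / (2 * dx) * (fr[1] - fl[1])
    return new_h, new_hu

# a small dam-break-like initial state over wet cells only
n, dx = 100, 1.0
h = [2.0 if i < n // 2 else 1.0 for i in range(n)]
hu = [0.0] * n
dt = 0.05  # well below the CFL limit dx / max|u +/- sqrt(g h)| ~= 0.23
for _ in range(20):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
```

On a periodic grid the scheme conserves mass exactly (the flux differences telescope), which is the property that makes finite volume methods attractive for dam-break problems; handling the wet/dry front is precisely where the paper's coordinate transformation comes in, since this plain update would divide by h = 0 on dry cells.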
Abstract: The use of waste rubber chips is not only of great importance in environmental terms, but can also increase the shear strength of soils. The purpose of this study was to evaluate the variation of the internal friction angle of liquefiable sandy soil modified with waste rubber chips. For this purpose, the geotechnical properties of unmodified soil samples and of samples modified with waste lining rubber chips were evaluated and analyzed by performing triaxial consolidated drained tests. In order to prepare the laboratory specimens, sandy soil with high liquefaction potential from part of the Rudsar shores in Gilan province, northern Iran, was partially replaced with two percent waste rubber chips. Samples were compressed until reaching two density levels of 15.5 and 16.7 kN/m³. Also, in order to find the optimal chip length in sandy soil, rectangular rubber chips with widths of 0.5 and 1 cm and lengths of 0.5, 1, and 2 cm were used. The results showed that the addition of rubber chips to liquefiable sandy soil greatly increases its shear resistance. It can also be seen that decreasing the width and increasing the length-to-width ratio of the rubber chips has a direct impact on the shear strength of the modified soil samples.
Abstract: Over the last decade, due to climate change and a strategy of natural resources preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), the microalgae bioeconomy can supply food and feed, as well as the pharmaceutical and power industries, with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5000 L) and flat panel reactors (8 × 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of the value-added components, process parameters such as light intensity, temperature, and pH are continuously monitored. The metabolic need for nutrients is met by the addition of micro- and macro-nutrients into the medium to ensure autotrophic growth conditions for the microalgae. Cultivation is followed by downstream processing and extraction of lipids, proteins, and saccharides. Lipid extraction is conducted in repeated-batch semi-automatic mode using the hot extraction method according to Randall, with hexane and ethanol as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation, between 8.1% and 13.9%.
The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with the protein content in the range of 39.8-47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues are used as substrate for anaerobic digestion at laboratory scale. The methane concentration, measured on a daily basis, showed some variation between samples after the different extraction steps but remained in the range of 48-55%. The CO2 formed during the fermentation process, and after combustion in the Combined Heat and Power unit, can potentially be reused within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
Abstract: This article presents the main results of a three-dimensional (3-D) numerical investigation of asphalt pavement structure behaviour using a coupled Finite Element-Mapped Infinite Element (FE-MIE) model. The validation and numerical performance of this model are assessed by confronting critical pavement responses with Burmister's solution and FEM simulation results for multi-layered elastic structures. The coupled model is then efficiently utilised to perform 3-D simulations of a typical asphalt pavement structure in order to investigate the impact of two tire configurations (conventional dual and new-generation wide-base tires) on critical pavement response parameters. The numerical results obtained show the effectiveness and accuracy of the coupled FE-MIE model. In addition, the simulation results indicate that, compared with the conventional dual tire assembly, the single wide-base tire causes slightly greater asphalt fatigue cracking and subgrade rutting potentials, but may nevertheless be utilised in view of its potential to provide numerous mechanical, economic, and environmental benefits.
Abstract: Source apportionment using a dispersion model depends primarily on the quality of the emission inventory. In the present study, a CMB receptor model has been used to identify the sources of PM2.5, while the AERMOD dispersion model has been used to account for sources of PM2.5 missing from the emission inventory. A statistical approach has been developed to quantify the missing sources not considered in the emission inventory, and the inventory of each grid was improved by adjusting emissions based on road lengths and the deficit between measured and modelled concentrations. The CMB analyses showed that fugitive sources - soil and road dust - contribute significantly to ambient PM2.5 pollution; since these were missing from the inventory, AERMOD significantly underestimated the ambient air concentration at most locations. The revised emission inventory showed a significant improvement in AERMOD performance, which is evident through statistical tests.
Abstract: Acidification is a technique used in oil reservoirs to improve annual production, reduce the skin effect, and increase the pressure of an oil well while eliminating the formation damage that occurs during drilling, completion, and other operations, and to create new channels allowing the easy circulation of oil around a producing well. This is achieved by injecting an acidizing fluid at a relatively low pressure to prevent fracturing the formation. The treatment fluid used depends on the type and nature of the reservoir rock traversed as well as its petrophysical properties. In order to understand the interaction mechanisms between the treatment fluids and the reservoir rock during acidizing, several candidate wells for stimulation were selected in the large Hassi Messaoud deposit in southern Algeria. The stimulation of these wells is completed using different fluids composed mainly of HCl acid with other additives such as corrosion inhibitors, clay stabilizers, and iron controllers. These treatment fluids are injected in two phases, namely tube cleaning (7.5% HCl) and matrix acidizing (15% HCl). The stimulation results obtained vary according to the type of rock traversed and its mineralogical composition. They show that production flow and head pressure increased from 1.99 m³/h to 3.56 m³/h and from 13 kgf/cm² to 20 kgf/cm², respectively, in the sand formations having good petrophysical properties (porosity = 16%) and a low amount of clay (Vsh = 6%).
Abstract: In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not required to be diagonal, i.e., the plant may be over- or under-actuated, provided that controllability and observability are preserved. An observer is constructed to directly estimate the state tracking error, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured outputs are not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms.
Abstract: The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in the overall performance in Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) in a virtual wideband channel. LTE-A (Long Term Evolution–Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in a more reliable and efficient operation of mobile networks and the enabling of high bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently-developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of radio resource allocation, and other parameters, into one summative assessment termed a Comparative Factor (CF). In addition, comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that the CA technology can improve network performance, especially in the case of indoor scenarios. Additionally, we show that an increase of carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad-channel conditions, often located in the periphery of the cells, can be improved by intelligent CA location. Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user-fairness plays a crucial role in improving the performance of LTE-A networks.
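The fairness component of a summative metric like the CF is commonly computed with Jain's index. The following sketch combines it with normalized average throughput into a simple CF-like score (the equal weighting and the normalization reference are assumptions for illustration, not the paper's exact CF definition):

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one user gets everything) to 1.0 (perfectly fair)."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

def comparative_factor(throughputs, w_thr=0.5, w_fair=0.5, thr_ref=100.0):
    """Toy CF: weighted mix of normalized mean throughput (Mbps) and
    fairness. The weights and reference throughput thr_ref are hypothetical."""
    mean_thr = sum(throughputs) / len(throughputs)
    return w_thr * min(mean_thr / thr_ref, 1.0) + w_fair * jain_fairness(throughputs)
```

Two cells with identical average throughput then score differently if one starves its cell-edge users, which is exactly why a summative metric must look beyond the mean when comparing CA deployment strategies.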
Abstract: Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, whereas when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, a distance function between incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of a k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
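A simplified per-coordinate sketch of the idea follows. Note the hedges: the paper's full Mahalanobis/Bhattacharyya construction is replaced here by a diagonal normalized distance, and the missing-value case uses the expected squared distance under the coordinate's observed distribution; the dataset is synthetic:

```python
import math

def column_stats(data):
    """Per-coordinate (mean, std) over the observed (non-None) values."""
    stats = []
    for j in range(len(data[0])):
        vals = [row[j] for row in data if row[j] is not None]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
        stats.append((mu, sd if sd > 0 else 1.0))
    return stats

def distance(a, b, stats):
    """Distance that distinguishes known from unknown values: known pairs
    use a normalized (diagonal Mahalanobis-like) difference; when a value
    is missing, the expected squared distance under that coordinate's
    observed distribution is used. Simplified stand-in for the paper's
    Bhattacharyya-based scheme."""
    d2 = 0.0
    for (x, y), (mu, sd) in zip(zip(a, b), stats):
        if x is not None and y is not None:
            d2 += ((x - y) / sd) ** 2
        elif x is None and y is None:
            d2 += 2.0                             # E[(X-Y)^2]/sd^2 for iid X, Y
        else:
            known = x if x is not None else y
            d2 += 1.0 + ((known - mu) / sd) ** 2  # E[(known-Y)^2]/sd^2
    return math.sqrt(d2)

def knn_predict(train, labels, query, k, stats):
    """Majority vote among the k training objects nearest to the query."""
    idx = sorted(range(len(train)),
                 key=lambda i: distance(train[i], query, stats))[:k]
    votes = [labels[i] for i in idx]
    return max(set(votes), key=votes.count)
```

Because the distance is defined for every pair of (possibly incomplete) profiles, the kNN classifier runs directly on the raw data, with no imputation step replacing the missing entries by a fixed value.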
Abstract: Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN), and a Cartesian genetic programming evolved artificial neural network (CGPANN) are explored in this study, without the application of any segmentation algorithm. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that the SVM outperforms the rest with an accuracy of 73.64%.
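The time- and frequency-domain feature extraction step can be sketched as follows. The feature set here (RMS energy, zero-crossing rate, low-frequency energy ratio) is a hypothetical illustration, not the paper's actual features, and the naive DFT is only for self-containment:

```python
import math

def extract_features(signal, fs):
    """A few simple unsegmented-PCG features: RMS energy, zero-crossing
    rate, and the fraction of spectral energy below 150 Hz (heart sounds
    concentrate most of their energy at low frequencies)."""
    n = len(signal)
    rms = math.sqrt(sum(s * s for s in signal) / n)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (n - 1)
    # naive O(n^2) DFT power spectrum; use an FFT library in practice
    half = n // 2
    power = []
    for k in range(half):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power.append(re * re + im * im)
    cutoff_bin = int(150 * n / fs)
    low_ratio = sum(power[:cutoff_bin]) / sum(power)
    return rms, zcr, low_ratio
```

Feature vectors of this kind, computed over the whole unsegmented recording, are what the SVM, ANN, and CGPANN models would then be trained on.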
Abstract: Inelastic deformation of the brace in a Special Concentrically Braced Frame (SCBF) causes inelastic damage to gusset plate connections, such as buckling at the edges. In this study, an analytical investigation was undertaken to improve the seismic performance of SCBF connections. Using edge stiffeners is one of the main solutions examined in this study to improve gusset plate connection behavior. To examine the effect of edge stiffeners on gusset plate connections, two groups of models, with and without edge stiffeners and with different types of braces, were built using ABAQUS software. The results show that including the edge stiffener reduces the equivalent plastic strain values in the region where the gusset plate connects to the beam and column, which can improve the seismic performance of the gusset plate. Furthermore, edge stiffeners significantly decrease the strain concentration in the regions where the gusset plates are connected to the beam and column. Moreover, providing the 2t_pl (twice the plate thickness) clearance distance reduces the plastic strain.