Abstract: Dye removal is an environmental concern because textile industries have been expanding with world population growth and industrialization. Adsorption is a low-cost and effective technique for removing dyes from wastewater, and the key challenge is finding suitable adsorbents. This work aims to develop effective adsorbents using a computational approach, which can predict the suitability of adsorbents for specific dyes in terms of binding free energies. For screening the best adsorbents, the computational approach is faster and cheaper than the experimental one. All starting structures of the dyes and adsorbents were optimized by quantum calculations, and the dye-adsorbent complexes were generated by molecular docking. The binding free energies obtained from docking were compared to binding free energies from experimental data, and the calculated energies were ranked in the same order as the experimental results. In addition, this work shows the possible orientations of the complexes. Two experimental groups of dye-adsorbent complexes were used. In the first group, the adsorbent is chitosan and the dyes are reactive red (RR) and direct sun yellow (DY). In the second group, the adsorbent is poly(1,2-epoxy-3-phenoxy) propane (PEPP) and the dyes are bromocresol green (BCG) and alizarin yellow (AY).
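A minimal sketch of the ranking step this abstract describes: given docking binding free energies for each dye-adsorbent complex, order the complexes from strongest to weakest binding. The numeric values below are hypothetical placeholders, not results reported in the paper.

```python
# Hypothetical docking scores (kcal/mol) for the dye-adsorbent pairs
# named in the abstract; more negative free energy = stronger binding.
docking_scores = {
    ("chitosan", "RR"):  -7.2,   # placeholder value
    ("chitosan", "DY"):  -6.1,
    ("PEPP",     "BCG"): -8.0,
    ("PEPP",     "AY"):  -5.4,
}

# Sort ascending so the most favorable (most negative) complex comes first.
ranking = sorted(docking_scores.items(), key=lambda item: item[1])

for (adsorbent, dye), dg in ranking:
    print(f"{adsorbent}-{dye}: dG = {dg:+.1f} kcal/mol")
```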
Abstract: The present study aimed to evaluate the understanding of students in Tehran universities (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive-survey research, the statistical population comprised undergraduate students (basic sciences and engineering) at the universities of Tehran. The sample consisted of 604 students selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by mathematics and mathematics education professors. Using Cronbach's alpha, the reliability coefficient of the task was 0.95, verifying its reliability. The collected data were analyzed with descriptive and inferential statistics (chi-squared and independent t-tests) in SPSS-24. According to the SOLO model, at the prestructural, unistructural, and multistructural levels, basic science students showed a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level. However, there was no significant difference in the average understanding of the two groups. The results indicated that students lacked a proper understanding of the numerical representation of the average rate of change and held misconceptions when using physics formulas to solve the problem. In addition, multiple solution approaches, along with their dominant methods, were derived during the qualitative analysis. The research recommends that teachers and professors focus on context problems involving approximate calculations and numerical representation, use software, and connect common relations between mathematics and physics in their teaching.
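A minimal sketch of the inferential step named above: a chi-squared test on SOLO-level counts for the two groups and an independent t-test on their scores. The counts and scores are synthetic placeholders, not the study's data (which were analyzed in SPSS-24).

```python
import numpy as np
from scipy import stats

# Rows: basic science, engineering; columns: SOLO levels
# (prestructural, unistructural, multistructural, relational). Synthetic.
counts = np.array([[40, 90, 120, 50],
                   [35, 70, 100, 75]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.3f}")

# Independent t-test comparing mean understanding scores of both groups.
rng = np.random.default_rng(0)
basic_sci = rng.normal(12.0, 3.0, 300)    # synthetic scores
engineering = rng.normal(12.3, 3.0, 304)
t, p_t = stats.ttest_ind(basic_sci, engineering)
print(f"t = {t:.2f}, p = {p_t:.3f}")      # large p -> no significant difference
```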
Abstract: Calculating the carbon footprint of cement concrete is a complex process that must consider the primary life phase (component and concrete production processes, transportation, construction works, maintenance of concrete structures) and the secondary life phase, including demolition and recycling. Taking the effect of concrete carbonation into consideration can reduce the calculated carbon footprint of concrete. In this paper, an example CO2 balance is presented for small bridge elements made of Portland cement reinforced concrete. The results include the effect of carbonation of the concrete in the structure and of the concrete rubble after demolition. It is shown that carbonation has a significant impact on the balance only when carbonation of the rubble is possible, because only the sequestration potential in the secondary phase of concrete life is of significant value.
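A minimal sketch of the balance idea described above: the net footprint is production, transport, and construction emissions minus the CO2 re-absorbed by carbonation in service (primary life) and by crushed rubble after demolition (secondary life). All numbers are hypothetical placeholders, not the paper's values.

```python
# Hypothetical CO2 balance for one structural element (kg CO2).
emissions_kg = {
    "cement_and_components": 25_000.0,
    "transport": 1_500.0,
    "construction": 800.0,
}
uptake_kg = {
    "carbonation_in_service": 900.0,    # slow surface carbonation
    "carbonation_of_rubble": 6_500.0,   # dominant term: crushed rubble
}

net = sum(emissions_kg.values()) - sum(uptake_kg.values())
share_secondary = uptake_kg["carbonation_of_rubble"] / sum(uptake_kg.values())
print(f"Net footprint: {net:.0f} kg CO2")
print(f"Secondary-life share of sequestration: {share_secondary:.0%}")
```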
Abstract: One of the main practical difficulties associated with tunnel construction is underground water. Uncontrolled water behavior may cause extra loads on the lining, mechanical instability, and unfavorable environmental problems. Estimating the rate of underground water inflow into tunnels is a complex task. The common calculation methods are empirical methods, analytical solutions, and numerical solutions based on equivalent continuous porous media. In this research, the rate of underground water inflow into the Tabriz metro line 1 tunnel was investigated with the finite difference method using FLAC2D software. The results of the Heuer analytical method and the numerical simulation showed good agreement. Fully coupled and one-way coupled hydro-mechanical states, as well as water-free conditions in the soil around the tunnel, were used in the numerical models, and these models were applied to evaluate the loading on the tunnel support system. The results showed that the fully coupled hydro-mechanical analysis predicted higher axial forces, moments, and shear forces in the lining, making it the more conservative and reliable approach for designing the tunnel lining system. As a sensitivity analysis, the water inflow rates into the tunnel were evaluated for different soil permeabilities, groundwater levels, and tunnel depths. The results demonstrated that, at a constant tunnel depth, the groundwater level affects the inflow rate more strongly than the other parameters investigated.
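A rough sketch of an analytical inflow estimate of the kind compared against the FLAC2D model. One widely cited form is Goodman's steady-state equation per unit tunnel length, Q = 2*pi*k*H / ln(2H/r), which Heuer scaled down by an empirical factor of roughly 1/8 to match observed inflows. Treat the formula choice and all input values as illustrative assumptions, not the paper's actual Tabriz parameters.

```python
import math

def heuer_inflow(k, H, r, reduction=1.0 / 8.0):
    """Estimated inflow per unit tunnel length (m^3/s per m).

    k: hydraulic conductivity of the ground (m/s)
    H: head of water above the tunnel axis (m)
    r: tunnel radius (m)
    """
    return reduction * 2.0 * math.pi * k * H / math.log(2.0 * H / r)

# Hypothetical inputs; the loop mimics a permeability sensitivity study.
for k in (1e-7, 1e-6, 1e-5):
    q = heuer_inflow(k=k, H=20.0, r=3.0)
    print(f"k = {k:.0e} m/s -> Q = {q * 86400:.2f} m^3/day per m of tunnel")
```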
Abstract: This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Risk profiles then employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along any required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of selected risk factor dimensions are presented in the evaluation section.
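A minimal sketch of the classification step described above: each institutional risk profile is a vector of risk factor values, and a naive Bayes classifier recommends an endangerment group. The feature names, training vectors, and group labels below are invented placeholders; in the paper these would come from the digital preservation survey.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Columns (hypothetical risk factors): software support, vendor support,
# openness, adoption level -- each scored on a 0..1 scale.
X_train = np.array([
    [0.9, 0.8, 0.9, 0.9],
    [0.7, 0.6, 0.8, 0.7],
    [0.4, 0.3, 0.5, 0.4],
    [0.2, 0.1, 0.3, 0.2],
    [0.1, 0.2, 0.1, 0.1],
])
y_train = ["low", "low", "medium", "high", "high"]  # endangerment groups

model = GaussianNB().fit(X_train, y_train)

# A new institutional risk profile, aggregated as a multidimensional vector.
profile = np.array([[0.3, 0.2, 0.4, 0.3]])
print("Recommended endangerment group:", model.predict(profile)[0])
print("Class probabilities:",
      dict(zip(model.classes_, model.predict_proba(profile)[0].round(2))))
```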
Abstract: Building systems are highly vulnerable to various kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely mostly on rules or purely model-based approaches, under the assumption that a model-based or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with a validity domain could greatly simplify the design of detection tests. The main objective of this paper is to consider test validity when validating the test model, taking into account non-modeled events such as occupancy, weather conditions, door and window openings, and the integration of expert knowledge about the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-based, range-based, and model-based tests, known as heterogeneous tests, is proposed to reduce modeling complexity. The calculation of logical diagnoses, drawing on artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
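A minimal sketch of the heterogeneous-test idea, under assumptions: rule-based, range-based, and model-based tests each carry a validity condition (context), and only tests valid in the current context contribute to the diagnosis. The sensor names, thresholds, and contexts are illustrative, not the paper's actual office test suite.

```python
# Current observed state of a hypothetical office.
state = {
    "indoor_temp": 27.5,     # deg C
    "heating_power": 2.0,    # kW
    "occupied": False,
    "window_open": True,
}

def rule_test(s):    # rule: heating should be off when unoccupied
    return not (s["heating_power"] > 0.5 and not s["occupied"])

def range_test(s):   # range: comfort band for indoor temperature
    return 19.0 <= s["indoor_temp"] <= 26.0

def model_test(s):   # crude static model: heating implies no cold room
    return not (s["heating_power"] > 0.5 and s["indoor_temp"] < 15.0)

tests = [
    # (name, test, validity domain: context in which the test is meaningful)
    ("rule_unoccupied_heating", rule_test,  lambda s: True),
    ("range_comfort_temp",      range_test, lambda s: not s["window_open"]),
    ("model_heating_effect",    model_test, lambda s: not s["window_open"]),
]

# An open window invalidates the range and model tests, so only the
# rule test can support a fault here.
failed = [name for name, test, valid in tests if valid(state) and not test(state)]
print("Failed (valid) tests:", failed or "none -> no fault supported")
```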
Abstract: By using partial factors of safety, uncertainties due to the inherent variability of soil properties and loads are taken into account in the geotechnical design process. According to the reliability index concept in Eurocode-0, in conjunction with Eurocode-7, a minimum safety level of β = 3.8 shall be established for reliability class RC2. The reliability of the system depends heavily on the choice of the prespecified safety factor and the choice of the characteristic soil properties. The safety factors stated in the standards are mainly based on experience; however, no generally accepted method for calculating a characteristic value exists within current design practice. In this study, a laterally loaded monopile is investigated and the influence of the chosen quantile values on the deterministic system, calculated with p-y springs, is presented. Monopiles are the most common foundation concept for offshore wind energy converters. Based on the calculations for non-cohesive soils, a recommendation is given for an appropriate quantile value that achieves the safety level required by the standards in a deterministic design.
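A minimal sketch of the quantile idea discussed above: the characteristic value of a soil property is taken as a prespecified quantile of its distribution. The friction-angle statistics below are hypothetical, and the normal distribution and candidate quantile levels are assumptions for illustration, not the paper's recommendation.

```python
import numpy as np
from scipy import stats

mean_phi = 35.0   # mean friction angle (degrees), hypothetical
cov = 0.10        # coefficient of variation, hypothetical
sigma = cov * mean_phi

# Candidate quantile levels for the characteristic value.
for p in (0.05, 0.10, 0.50):
    phi_k = stats.norm.ppf(p, loc=mean_phi, scale=sigma)
    print(f"{p:.0%} quantile -> characteristic phi = {phi_k:.1f} deg")

# In a reliability check, the resulting design would then be verified
# against the target index beta = 3.8 for reliability class RC2.
```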
Abstract: The present paper reports the cracking moment estimates of a set of steel-reinforced, Fiber Reinforced Polymer (FRP)-reinforced, and hybrid steel-FRP reinforced concrete beams, calculated from different analytical formulations in the codes, together with the experimental cracking load values. A total of three steel-reinforced, four FRP-reinforced, twelve hybrid FRP-steel over-reinforced, and five hybrid FRP-steel under-reinforced concrete beam tests were analyzed within the scope of the study. Glass FRP (GFRP) and Basalt FRP (BFRP) bars were used in the beams as FRP bars. In under-reinforced hybrid beams, rupture of the FRP bars preceded crushing of the concrete, while concrete crushing preceded FRP rupture in over-reinforced beams. In both types, steel yielding took place long before FRP rupture and concrete crushing. The cracking moment mainly depends on two quantities, namely the moment of inertia of the section at the initiation of cracking and the flexural tensile strength of concrete, i.e., the modulus of rupture. In the present study, two different definitions of the uncracked moment of inertia, i.e., the gross and the uncracked transformed moments of inertia, were adopted. Two analytical equations for the modulus of rupture (ACI 318M and Eurocode 2) were utilized in the calculations, as well as the experimental tensile strength of concrete from prismatic specimen tests. The ACI 318M modulus of rupture expression produced cracking moment estimates closer to the experimental cracking moments of FRP-reinforced and hybrid FRP-steel reinforced concrete beams when used in combination with the uncracked transformed moment of inertia, yet the Eurocode 2 modulus of rupture expression gave more accurate cracking moment estimates in steel-reinforced concrete beams. All of the analytical definitions produced values considerably different from the experimental cracking load values of the solely FRP-reinforced concrete beam specimens.
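A minimal sketch of the cracking-moment calculation described above: M_cr = f_r * I / y_t, with the modulus of rupture f_r taken either from ACI 318M (f_r = 0.62*sqrt(f_ck)) or from Eurocode 2's flexural tensile strength (f_ctm,fl = max((1.6 - h/1000)*f_ctm, f_ctm), with f_ctm = 0.30*f_ck^(2/3) for f_ck <= 50 MPa). The section dimensions and concrete strength are hypothetical, and the gross inertia is used for brevity where the paper also uses the uncracked transformed inertia.

```python
import math

b, h = 0.25, 0.40            # section width and height (m), hypothetical
fck = 30.0                   # concrete strength (MPa), hypothetical

I_gross = b * h ** 3 / 12.0  # gross moment of inertia (m^4)
y_t = h / 2.0                # centroid to tension face (m)

fr_aci = 0.62 * math.sqrt(fck)            # ACI 318M modulus of rupture (MPa)
fctm = 0.30 * fck ** (2.0 / 3.0)          # EC2 mean tensile strength (MPa)
h_mm = h * 1000.0                         # EC2 size factor uses h in mm
fr_ec2 = max((1.6 - h_mm / 1000.0) * fctm, fctm)  # EC2 flexural strength

for name, fr in (("ACI 318M", fr_aci), ("Eurocode 2", fr_ec2)):
    M_cr = fr * 1e6 * I_gross / y_t       # Pa * m^4 / m = N*m
    print(f"{name}: f_r = {fr:.2f} MPa, M_cr = {M_cr / 1e3:.1f} kN*m")
```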
Abstract: Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict unknown outcomes. With the advent of big data, many studies have suggested using predictive analytics to process and analyze big data. Nevertheless, they have been hindered by the limits of classical predictive analysis methods on large amounts of data. In fact, because of its volume, nature (semi-structured or unstructured), and variety, big data cannot be analyzed efficiently with classical methods of predictive analysis. The authors attribute this weakness to the fact that classical predictive analysis algorithms do not allow calculations to be parallelized and distributed. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.
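A minimal sketch of the kind of change needed to distribute CART: the best-split search is recast as a per-partition aggregation (here, class histograms on each side of a candidate split) followed by a reduce step, which is what makes it expressible in a map-reduce or distributed setting. This is an illustrative reformulation, not the authors' extended algorithm.

```python
from collections import Counter

def partition_histograms(rows, feature, threshold):
    """Map step: class counts on each side of a candidate split,
    computed locally on one data partition."""
    left, right = Counter(), Counter()
    for x, label in rows:
        (left if x[feature] <= threshold else right)[label] += 1
    return left, right

def gini(counts):
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

# Two "partitions" standing in for data blocks on different workers.
p1 = [((2.0,), "a"), ((3.1,), "a"), ((4.5,), "b")]
p2 = [((1.2,), "a"), ((5.0,), "b"), ((5.5,), "b")]

# Map on each partition, then reduce by summing the histograms.
l1, r1 = partition_histograms(p1, feature=0, threshold=3.5)
l2, r2 = partition_histograms(p2, feature=0, threshold=3.5)
left, right = l1 + l2, r1 + r2

# Score the candidate split from the merged statistics only.
n = sum(left.values()) + sum(right.values())
weighted = (sum(left.values()) * gini(left)
            + sum(right.values()) * gini(right)) / n
print(f"Split x[0] <= 3.5: weighted Gini = {weighted:.3f}")
```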
Abstract: Geometric modeling plays an important role in the construction and manufacture of curves, surfaces, and solids. Its algorithms are critically important not only in the automobile, ship, and aircraft manufacturing business, but also in a wide variety of modern applications, e.g., robotics, optimization, computer vision, data analytics, and visualization. The calculation and display of geometric objects can be accomplished by six techniques: polynomial basis, recursive, iterative, coefficient matrix, polar form, and pyramidal algorithms. In this research, the coefficient matrix (also called the monomial form approach) is used to model polynomial rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong, and NB1 surfaces. Examples of the monomial forms of these surface models are illustrated in several aspects, e.g., construction, derivatives, model transformation, degree elevation, and degree reduction.
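A minimal sketch of the coefficient (monomial) matrix approach: a polynomial patch is evaluated as S(u,v) = U * M * P * M^T * V^T, where U and V are monomial row vectors and M converts the basis to monomial form. For brevity this uses the cubic Bezier conversion matrix as a stand-in; Said-Ball, Wang-Ball, DP, Dejdumrong, and NB1 patches would plug in their own conversion matrices in the same way.

```python
import numpy as np

# Monomial (power-basis) conversion matrix for the cubic Bernstein basis.
M = np.array([[ 1,  0,  0, 0],
              [-3,  3,  0, 0],
              [ 3, -6,  3, 0],
              [-1,  3, -3, 1]], dtype=float)

# 4x4 grid of control-point heights (one scalar coordinate, hypothetical).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 1.5, 1.5, 0.5],
              [0.5, 1.5, 1.5, 0.5],
              [0.0, 0.5, 0.5, 0.0]])

def patch(u, v):
    """Evaluate the patch at (u, v) via the coefficient matrix form."""
    U = np.array([1.0, u, u * u, u ** 3])
    V = np.array([1.0, v, v * v, v ** 3])
    return U @ M @ P @ M.T @ V

print(patch(0.5, 0.5))   # height at the patch centre
print(patch(0.0, 0.0))   # reproduces the corner control point P[0, 0]
```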
Abstract: 2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most of the world's well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. There are many standard processes and algorithms for training deep neural networks, but the performance of different frameworks can differ. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide optimization directions for both performance tuning and algorithmic design.
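A minimal sketch of the measurement methodology: time a fixed number of training steps and report images per second, the metric typically used to compare distributed frameworks on ResNet-50. The `train_step` below is a dummy stand-in for a real framework call (one synchronous step over all GPUs); the batch size and step counts are arbitrary assumptions.

```python
import time

BATCH_SIZE = 256          # global batch across all GPUs, hypothetical
WARMUP, MEASURED = 5, 20  # warm-up steps are excluded from timing

def train_step():
    # Placeholder for one distributed training step (forward, backward,
    # gradient exchange). Here it just burns a little CPU time.
    sum(i * i for i in range(200_000))

for _ in range(WARMUP):   # exclude startup and caching overheads
    train_step()

start = time.perf_counter()
for _ in range(MEASURED):
    train_step()
elapsed = time.perf_counter() - start

print(f"{MEASURED * BATCH_SIZE / elapsed:.0f} images/sec "
      f"({elapsed / MEASURED * 1000:.1f} ms/step)")
```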
Abstract: In this paper, techniques for solving time-dependent electromagnetic wave propagation equations based on the Finite Difference Method (FDM) are proposed, and the results are compared with the Finite Element Method (FEM) in 2D through several special simulation examples. 2D dynamical wave equations for lossy media, including the case of a constant source, are discussed to establish the symbolic manipulation of wave propagation problems. The main objective of this contribution is to present a comparative study of two suitable numerical methods and to show that both can be applied effectively and efficiently to all types of wave propagation problems, in both linear and nonlinear cases, using symbolic computation. The results show, however, that the FDM is more appropriate for solving nonlinear cases in the symbolic setting. Furthermore, some complex-domain examples comparing the electromagnetic wave equations are considered. Calculations are performed in Mathematica, making some useful contributions to the program and leveraging symbolic evaluations of the FEM and FDM.
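A minimal numeric (non-symbolic) sketch of an FDM scheme of the kind discussed above: a leapfrog update for the 2D lossy wave equation u_tt + sigma*u_t = c^2*(u_xx + u_yy) + f, with a constant source in the middle of the domain and fixed zero boundaries. The grid size, loss coefficient, and source strength are illustrative; the paper itself works symbolically in Mathematica.

```python
import numpy as np

n, c, sigma, dx = 101, 1.0, 0.5, 0.01
dt = 0.5 * dx / (c * np.sqrt(2.0))   # safely inside the 2D CFL limit

u_old = np.zeros((n, n))             # u at time step k-1
u = np.zeros((n, n))                 # u at time step k
f = np.zeros((n, n))
f[n // 2, n // 2] = 10.0             # constant point source

a = sigma * dt / 2.0                 # loss term, centered in time
for _ in range(300):
    # Five-point Laplacian; boundaries are re-zeroed after each update.
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    u_new = (2.0 * u - (1.0 - a) * u_old
             + dt ** 2 * (c ** 2 * lap + f)) / (1.0 + a)
    u_new[0, :] = u_new[-1, :] = u_new[:, 0] = u_new[:, -1] = 0.0
    u_old, u = u, u_new

print(f"max |u| after 300 steps: {np.abs(u).max():.4f}")
```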
Abstract: Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The oil is heated in the solar field and transfers the absorbed heat to an oil-water heat exchanger that produces the steam driving the turbines of the power plant. Current efforts aim to develop PTCs with direct steam generation (DSG), in which pressurized water circulates in the receiver tube to generate steam directly in the solar loop. This reduces the investment and maintenance costs of PTCs (the oil-water exchangers are removed) and avoids the environmental risks associated with thermal oils. The pressure drops in these systems are an important parameter for ensuring proper operation. Determining these losses is complex because of the presence of two phases, and they are most often described by models based on empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for inlet pressures and flow rates ranging from 3 to 10 MPa and from 0.4 to 0.6 kg/s, respectively. The comparison of the numerical results with experimental data demonstrates the validity of some of the models, depending on the pressures and inlet flow rates in the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the pressure evolution along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
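A minimal sketch of one empirical model of the kind compared in the paper: the homogeneous equilibrium model for the two-phase frictional pressure gradient, dp/dz = f * G^2 / (2 * D * rho_h), with the mixture density rho_h = 1 / (x/rho_g + (1-x)/rho_l) and a Darcy-Blasius friction factor. The fluid properties and geometry are rough placeholders, not the paper's DSG data, and this is only one of several candidate correlations.

```python
def homogeneous_dpdz(G, x, D, rho_l, rho_g, mu_l):
    """Frictional pressure gradient (Pa/m), homogeneous two-phase model."""
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)  # homogeneous density
    Re = G * D / mu_l                              # Reynolds number
    f = 0.316 * Re ** -0.25                        # Darcy-Blasius factor
    return f * G * G / (2.0 * D * rho_h)

D = 0.05                              # receiver inner diameter (m), assumed
m_dot = 0.5                           # mass flow rate (kg/s), within 0.4-0.6
G = m_dot / (3.14159 * D * D / 4.0)   # mass flux (kg/m^2/s)

# Rough water/steam properties around ~10 MPa (placeholders).
rho_l, rho_g, mu_l = 688.0, 55.0, 8.0e-5

# Vapor quality x grows along the tube as the water boils.
for x in (0.1, 0.3, 0.5):
    dpdz = homogeneous_dpdz(G, x, D, rho_l, rho_g, mu_l)
    print(f"x = {x:.1f}: dp/dz ~ {dpdz:.0f} Pa/m")
```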
Abstract: Analysing the world banking sector, we realize that traditional risk measurement methodologies no longer reflect the actual scenario of uncertainty and leave out events that can change the dynamics of markets. Considering this, regulators and financial institutions have begun to search for more realistic models. The aim is to include external influences and interdependencies between agents in order to describe and measure the operation of these complex systems and their risks in a more coherent and credible way. Within this context, X-Events are more frequent than assumed and, with constant change and uncertainty, the concept of antifragility is gaining great prominence over other risk management methodologies. It is very useful to analyse whether a system succumbs to (fragile), resists (robust), or benefits from (antifragile) disorder and stress. This work therefore proposes the Banking Antifragility Index (BAI), based on the calculation of a triangular fuzzy number, to "quantify" qualitative criteria linked to antifragility.
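A minimal sketch of the triangular-fuzzy-number machinery behind an index like the BAI: each qualitative antifragility criterion is scored as a triangular fuzzy number (low, mode, high), the weighted criteria are aggregated, and a crisp index is obtained by centroid defuzzification. The criteria names, weights, and scores are invented for illustration, not the paper's construction.

```python
def tfn_scale(t, w):          # scalar * triangular fuzzy number
    return tuple(w * v for v in t)

def tfn_add(a, b):            # component-wise TFN addition
    return tuple(x + y for x, y in zip(a, b))

def centroid(t):              # crisp (defuzzified) value of a TFN
    return sum(t) / 3.0

# (low, mode, high) expert scores on a 0..10 scale, with weights.
criteria = {
    "redundancy":        ((6.0, 7.0, 9.0), 0.40),
    "optionality":       ((3.0, 5.0, 6.0), 0.35),
    "stressor_exposure": ((4.0, 6.0, 8.0), 0.25),
}

bai_fuzzy = (0.0, 0.0, 0.0)
for score, weight in criteria.values():
    bai_fuzzy = tfn_add(bai_fuzzy, tfn_scale(score, weight))

print(f"BAI as TFN: {tuple(round(v, 2) for v in bai_fuzzy)}")
print(f"Defuzzified BAI: {centroid(bai_fuzzy):.2f} / 10")
```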
Abstract: Laser-based high-resolution spectroscopic experimental techniques, namely Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES), and Surface Plasmon Resonance (SPR), have been used to study the composition and degradation of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse, and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous to human health and can cause respiratory and genetic disorders. Combustion soot from thermal power plants, industry, aircraft, ships, and vehicles can lead to environmental and climate destabilization, contributing to land, water, and air pollution as well as global warming. The detection of such toxicants through elemental analysis is a very serious issue for the waste management of various organic and inorganic hydrocarbons and radioactive waste elements. In view of these points, the current study of used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring water content and kinematic viscosity, providing a crude analysis of the degradation of the used engine oil samples. A microscopic quantitative and qualitative analysis by the RDE-OES technique confirmed the presence of elemental impurities (Pb, Al, Cu, Si, Fe, Cr, Na, and Ba lines) in the used engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various atomic transition levels. The recorded Pb transition lines confirm that the maximum degradation occurred in used engine oil samples nos. 3 and 4. Apart from the basic tests, the dielectric constants and refractive indices of the engine oils were calculated via SPR analysis.
Abstract: Many activities in society take place underground, such as mining, tunnel construction, and subways, which are vital to the development of society. Once an accident occurs in these places, the interruption of traditional wired communication hampers rescue work. In order to provide positioning, early warning, and command functions for underground personnel and improve rescue efficiency, it is necessary to develop and design an emergency ground communication system. Conventional underground communication is easily subjected to narrowband interference, a problem that spread spectrum communication can address. However, general spread spectrum methods such as direct-sequence spreading are inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication not only has the anti-interference ability and good concealment of traditional spread spectrum systems, but also a relatively high frequency-band utilization rate and strong information transmission capability, so the technology has been widely used in practice. This paper presents a PCSS communication model: the multiple detection parallel combined spread spectrum (MDPCSS) communication system. The principle of the MDPCSS communication system is described, namely that the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. Block diagrams of the transmitter and receiver of the MDPCSS system are introduced, along with the formula for the system bit error rate (BER), and the simulation and analysis of the system BER are completed. Comparison with conventional parallel PCSS communication shows that the proposed system indeed reduces the BER and improves system performance. Furthermore, the influence of the selected pseudo-code length on the system BER is simulated and analyzed; the conclusion is that the longer the pseudo-code, the smaller the system error rate.
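A minimal sketch of the kind of Monte Carlo BER simulation used to evaluate such a system: BPSK direct-sequence spreading over AWGN with a narrowband jammer, with the BER measured against the spreading-code length. This is plain DSSS for illustration; the paper's MDPCSS adds block processing, cyclic shifts, and multiple detection on top of such a chain.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_dsss(n_bits, code_len, snr_db, jam_amp=1.0):
    code = rng.choice([-1.0, 1.0], size=code_len)   # pseudo-noise code
    bits = rng.integers(0, 2, n_bits) * 2 - 1       # BPSK symbols +-1
    chips = np.repeat(bits, code_len) * np.tile(code, n_bits)
    # Noise sized for the per-bit Eb/N0 (Eb = code_len chip energies).
    ebn0 = 10 ** (snr_db / 10.0)
    sigma = np.sqrt(code_len / (2.0 * ebn0))
    n = np.arange(chips.size)
    jam = jam_amp * np.cos(2.0 * np.pi * 0.05 * n)  # narrowband jammer
    rx = chips + jam + rng.normal(0.0, sigma, chips.size)
    # Despread: correlate each bit's chip block with the code and decide.
    decisions = np.sign(rx.reshape(n_bits, code_len) @ code)
    return np.mean(decisions != bits)

# Longer codes give more processing gain against the narrowband jammer.
for code_len in (7, 31, 127):
    print(f"code length {code_len:3d}: "
          f"BER = {ber_dsss(20_000, code_len, 4.0):.4f}")
```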
Abstract: The climatic conditions over the Indian region are highly dependent on the monsoon, and India receives the maximum amount of rainfall during the southwest monsoon. The Indian economy is highly dependent on agriculture, and flood and drought years influence the entire cultivation system as well as the economy of the country, as Indian agriculture is still highly dependent on monsoon rainfall. The present study investigates the flood and drought years for the north-west Himalayan (NWH) region from 1951 to 2014 using area-averaged Indian Meteorological Department (IMD) rainfall data. The Normalized Index (NI) is utilized to determine whether a particular year is a drought or flood year. The data were extracted for the NWH states, namely Uttarakhand (UK), Himachal Pradesh (HP), and Jammu and Kashmir (J&K), to obtain the rainy-season average rainfall for each year, the climatological mean, and the standard deviation. The results are plotted to show that some years are drought years, some are flood years, and the rest are neutral. The flood and drought years can also be related to the large-scale phenomena El Niño and La Niña.
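A minimal sketch of the Normalized Index classification described above: NI = (seasonal rainfall - climatological mean) / standard deviation, with years beyond a threshold labeled flood or drought. The rainfall series is synthetic, and the +-1 threshold is a common convention assumed here, not necessarily the paper's exact cutoff.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1951, 2015)
# Synthetic rainy-season rainfall totals (mm) standing in for IMD data.
rainfall = rng.normal(1000.0, 150.0, years.size)

# Normalized Index: standardized anomaly of each year's rainfall.
ni = (rainfall - rainfall.mean()) / rainfall.std()

for year, value in zip(years, ni):
    if value > 1.0:
        print(f"{year}: flood   (NI = {value:+.2f})")
    elif value < -1.0:
        print(f"{year}: drought (NI = {value:+.2f})")
# Years with |NI| <= 1 are treated as neutral.
```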
Abstract: The recognition of objects contained in images has always presented a research challenge because of several difficulties, such as the variability in the shape, position, and contrast of objects. In this paper, we are interested in object recognition. The classical Hough Transform (HT) is a tool for detecting straight line segments in images; the technique has been generalized (GHT) to detect arbitrary shapes, which need not be defined analytically but rather by a particular silhouette. For more precision, we propose to combine the results of the GHT with the results of a similarity calculation between the histograms and the spatiograms of the images. The main purpose of our work is to use these recognition concepts to generate sentences in Arabic that summarize the content of the image.
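A minimal sketch of the similarity step mentioned above, assuming the Bhattacharyya coefficient as the histogram measure (the paper does not specify one here): two grayscale histograms are normalized and compared. Spatiograms extend this by also comparing the spatial mean and covariance of each bin; only the histogram half is sketched, on synthetic arrays.

```python
import numpy as np

def bhattacharyya(img_a, img_b, bins=32):
    """Histogram similarity in [0, 1]; 1.0 = identical distributions."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    pa = ha / ha.sum()                  # normalize to probabilities
    pb = hb / hb.sum()
    return np.sum(np.sqrt(pa * pb))

rng = np.random.default_rng(3)
template = rng.integers(0, 256, (64, 64))      # synthetic template image
candidate = np.clip(template + rng.normal(0, 10, (64, 64)), 0, 255)
unrelated = rng.integers(0, 256, (64, 64)) // 2   # darker, different image

print(f"template vs candidate: {bhattacharyya(template, candidate):.3f}")
print(f"template vs unrelated: {bhattacharyya(template, unrelated):.3f}")
```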
Abstract: Pile load tests should be applied to check bearing capacity calculations and to determine the settlement of the pile corresponding to the test load. Strain gauges can be installed in a pile to determine the shaft resistance of the pile for each soil layer, and detailed results can be obtained from strain gauges placed at certain levels in test piles. In the scope of this study, pile load test data obtained from two different projects are examined. Instrumented static pile load tests were applied to a total of seven bored test piles of different diameters (80 cm, 150 cm, and 200 cm) and different lengths (30-76 m) at two different project sites. Settlement analysis of the test piles was carried out using several load transfer methods as well as the finite element method, with the three-dimensional finite element program Plaxis 3D. First, the bearing capacities of the test piles are determined and compared with the strain gauge data required for the settlement analysis. Then, the settlement values of the test piles are estimated using load transfer methods developed in recent years and the finite element method. The aim of this study is to show the similarities and differences between the results obtained from the settlement analysis methods and the instrumented pile load tests.
Abstract: Logistic regression (LR) and the multivariate adaptive regression spline (MarSpline) are applied and verified for landslide susceptibility mapping in Oudka, Morocco, using a geographical information system. From a spatial database containing landslide mapping, topography, soil, hydrology, and lithology data, eight landslide-related factors (elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and the Normalized Difference Vegetation Index, NDVI) were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for model training and the second for validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
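A minimal sketch of the logistic-regression half of the comparison: fit LR on landslide factors and score it with the AUC on both the training (success rate) and validation (prediction rate) parts, mirroring the split described above. The factor matrix is synthetic, and MARS (the MarSpline model) is omitted since it needs a separate package such as py-earth.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1000
# Eight synthetic factors standing in for elevation, slope, aspect,
# distances to streams/roads/faults, lithology class, and NDVI.
X = rng.normal(size=(n, 8))
logit = 1.5 * X[:, 1] - 1.0 * X[:, 3] + 0.5 * X[:, 7]  # arbitrary weights
y = (logit + rng.normal(scale=1.0, size=n)) > 0        # landslide yes/no

# Split as in the paper: one part to build the model, one to validate it.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print(f"success rate    AUC = "
      f"{roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]):.3f}")
print(f"prediction rate AUC = "
      f"{roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```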