Forensic Medical Capacities for the Examination of Saliva Stains on Physical Evidence after Washing

Recent advances in genetics have sharply increased the capacity to form reliable evidence in forensic examinations, and traces of biological origin are important sources of information about a crime. Worldwide, sexual offenses have increased, and among them are cases in which the perpetrators use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives, namely enzymes, which purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of fabric to which saliva had been applied were examined. Materials and Methods: Washing machines from well-known manufacturers of household appliances, with different technical characteristics, and widely advertised brands of washing powder were used for the test washing. Over 3,500 experimental samples were tested. After washing, the traces of saliva were identified using modern forensic medical research methods. Results: The influence of different washing programs, types of washing machines and washing powders on the detection and identification of saliva stains on physical evidence after washing was assessed. The results of experimental and practical expert studies show that in most cases it is not possible to reach a conclusion identifying saliva traces on physical evidence after washing. This is a consequence of the effect of biological additives and other factors on traces of saliva during washing. Conclusions: On the basis of these results, the feasibility of detecting saliva traces on physical evidence after washing was assessed. The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of evidence after washing. Additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.

Morphological Analysis of English L1-Persian L2 Adult Learners’ Interlanguage: From the Perspective of SLA Variation

Studies on interlanguage have long been engaged in describing the phenomenon of variation in SLA. Pursuing the same goal and particularly addressing the role of linguistic features, this study describes the use of Persian morphology in the interlanguage of two adult English-speaking learners of Persian as a second language. Taking a general approach that combines contrastive analysis, error analysis and interlanguage analysis, the study focuses on the identification and prediction of possible instances of transfer from English L1 to Persian L2 across six elicitation tasks, investigating whether any contextual features variably influence the learners' order of morpheme accuracy in the areas of copula, possessives, articles, demonstratives, plural forms, personal pronouns, and genitive case. The results point to the existence of task variation in the interlanguage system of the Persian L2 learners.

High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis

In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to remove the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and vibration analysis that account for the crack effect are first developed and carried out in MATLAB based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect and thereby locate the crack. A relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions along the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale, corresponding to the highest relative wavelet coefficient at the crack position, is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks of different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to the different vibration mode shapes in order to overcome the edge effect of the wavelet transformation and its influence on the localization of cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and initial mode shapes demonstrates that the window function eases the identification of cracks close to the boundaries.
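
The core of the procedure can be illustrated with a minimal sketch: a one-scale real Gabor transform of a sampled mode shape, a Hanning window, and the relative wavelet coefficient. The wavelet centre frequency, the scale, the intact-region limits and the exaggerated crack signature below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def gabor_wavelet(t, omega0=5.0):
    """Real-valued Gabor (Gaussian-windowed cosine) mother wavelet."""
    return np.exp(-0.5 * t**2) * np.cos(omega0 * t)

def spatial_wavelet_coeffs(signal, x, scale, omega0=5.0):
    """Magnitude of the spatial wavelet coefficients at a single scale."""
    dx = x[1] - x[0]
    coeffs = np.empty_like(signal)
    for i, b in enumerate(x):
        psi = gabor_wavelet((x - b) / scale, omega0) / np.sqrt(scale)
        coeffs[i] = np.sum(signal * psi) * dx
    return np.abs(coeffs)

# Illustrative data: first bending mode of a beam with an exaggerated local
# slope discontinuity standing in for the crack signature (assumed, not the
# Timoshenko model of the study)
x = np.linspace(0.0, 1.0, 501)                     # normalized beam axis
crack_pos = 0.37
phi = np.sin(np.pi * x) + 0.05 * np.where(x > crack_pos, x - crack_pos, 0.0)

# Hanning window suppresses the edge effect of the transform
phi_win = phi * np.hanning(phi.size)

W = spatial_wavelet_coeffs(phi_win, x, scale=0.02)

# Relative wavelet coefficient: coefficients normalized by the lowest value
# found in a region assumed to be intact (here 0.6 < x < 0.9)
intact = (x > 0.6) & (x < 0.9)
rel_W = W / W[intact].min()

interior = (x > 0.1) & (x < 0.9)                   # ignore residual edge effects
print("estimated crack location:", x[np.argmax(np.where(interior, rel_W, 0.0))])
```

In the study, this evaluation is repeated over a range of scales and over several modes to find the scale that maximizes the relative coefficient at the crack position.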

Lean Models Classification: Towards a Holistic View

The purpose of this paper is to present a classification of Lean models that aims to capture all the concepts related to this approach and thus facilitate its implementation. This classification allows the identification of the most relevant models according to several dimensions. From this perspective, we present a review and an analysis of the Lean models literature and propose dimensions for classifying the current proposals, taking into account, among other things, the axes of the Lean approach, the maturity of the models and their application domains. This classification allowed us to conclude that researchers essentially treat the Lean approach as a toolbox and design their models to solve problems related to a specific environment. Since the Lean approach is no longer intended only for the automotive sector, where it was invented, but for all fields (IT, hospitals, etc.), we consider that this approach requires a generic model capable of being implemented in all areas.

The Role of Paper in the Copy Identification of Safavid-Era Shahnamehs of the Tabriz School

To investigate and explain the history of a manuscript copy, we must refer to its past, because it highlights aspects of the civilization of the people among whom the copy was produced. In this paper, eight Safavid-era copies of Ferdowsi's Shahnameh from the Tabriz school, held in Iranian libraries and museums, are studied. Ferdowsi's Shahnameh is undoubtedly one of the most important books to have been transcribed many times in different eras, because it recounts the prowess of the Iranian champions and covers the history of Iran from the Pishdadian to the Sasanian dynasty; it has also long been attractive to governors and artists. The research methodology of this article is analytical-descriptive. The research hypothesis is that the papers used for writing Shahnamehs in the Safavid-era Tabriz school were mostly Isfahani papers. At that time, Isfahani paper was unique in terms of quality, clarity, flatness of the sheets, volume, shape, softness and elegance, strength, and smoothness, and it was mostly used to prepare courtly and exquisite copies. This shows that the copies prepared in the Safavid-era Tabriz school were very important, because artists and patrons from outside the court also ordered Isfahani paper for writing their books.

VISMA: A Method for System Analysis in Early Lifecycle Phases

The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase and the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, such as a control unit for a pump motor, but it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components and then in the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected; it is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components form the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing in this way, shell after shell is added, with its respective parts, until the border of the complete system (external border) is reached. Finally, two external shells are added to complete the system view: the environment shell and the use case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear graphical description and overview of a system, its main parts and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
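
The inside-out shell construction of the first step can be sketched as a small script, under the assumption that the system is already captured as a graph of components linked by input/output relations; the component names below are invented for illustration, and the environment and use case shells would still be added manually.

```python
# Hypothetical component connectivity: an edge means "exchanges input/output with".
connections = {
    "pump_control_unit": ["motor_driver", "pressure_sensor", "can_bus"],
    "motor_driver": ["pump_motor"],
    "pressure_sensor": ["hydraulic_line"],
    "can_bus": ["vehicle_gateway"],
    "pump_motor": ["hydraulic_line"],
}

def build_shells(suc, connections):
    """VISMA step 1 (inside-out): group components into shells around the SUC,
    shell by shell, until the external border of the system is reached."""
    adj = {}
    for a, linked in connections.items():
        for b in linked:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)

    shells, seen, frontier = [], {suc}, {suc}
    while frontier:
        nxt = {n for c in frontier for n in adj.get(c, ()) if n not in seen}
        if nxt:
            shells.append(sorted(nxt))
        seen |= nxt
        frontier = nxt
    return shells

for i, shell in enumerate(build_shells("pump_control_unit", connections), start=1):
    print(f"shell {i}: {shell}")
```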

Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Due to the high reliability achieved by DNA tests, since the 1980s this kind of test has supported the resolution of a growing number of criminal cases, including old unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profile databases is a typical way to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and large storage capacity. It is therefore essential to develop methodologies, supported by software tools, capable of organizing the work and minimizing the time spent on both biological sample processing and genetic profile analysis. Thus, the present work aims at the development of a software system for forensic genetics laboratories that allows the management of samples, criminal cases and a local database, minimizes the time spent in the workflow and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, as well as the workflows and requirements that make up the system, were considered. The system uses HTML, CSS, and JavaScript as web technologies, with the NodeJS platform as the server, which is highly efficient for data input and output. In addition, the data are stored in a relational database (MySQL), which is free, favoring user acceptance. The software system developed here makes the workflow and the analysis of samples more agile, contributing to the rapid insertion of genetic profiles into the national database and to increasing the resolution of crimes. The next step of this research is its validation, so that it can operate in accordance with current Brazilian national legislation.
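
The profile-comparison support mentioned above can be illustrated with a minimal, hypothetical sketch in which a genetic profile is a mapping from STR locus to allele set; the locus names and allele values are invented, and the real matching rules follow laboratory and legal requirements rather than this simple ratio.

```python
# A genetic profile modelled as: STR locus -> set of alleles (repeat numbers)
Profile = dict[str, set[str]]

def compare_profiles(query: Profile, reference: Profile) -> float:
    """Fraction of shared loci whose allele sets match exactly."""
    shared = set(query) & set(reference)
    if not shared:
        return 0.0
    matches = sum(1 for locus in shared if query[locus] == reference[locus])
    return matches / len(shared)

# Hypothetical profiles
evidence = {"D8S1179": {"13", "14"}, "TH01": {"6", "9.3"}, "FGA": {"21", "24"}}
suspect = {"D8S1179": {"13", "14"}, "TH01": {"6", "9.3"}, "FGA": {"22", "24"}}

print(f"matching loci ratio: {compare_profiles(evidence, suspect):.2f}")  # 0.67
```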

A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
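
The contrast between output residuals and InvSim input residuals can be sketched with a toy first-order actuator model; the dynamics, fault size and sampling below are illustrative assumptions, not the rover model of the paper.

```python
import numpy as np

# Toy discrete model: x[k+1] = a*x[k] + b*u[k], measured output y = x
a, b, dt, N = 0.95, 0.05, 0.1, 200
t = np.arange(N) * dt

def forward(u, x0=0.0):
    """Forward simulation of the nominal model."""
    x = np.zeros(N)
    x[0] = x0
    for k in range(N - 1):
        x[k + 1] = a * x[k] + b * u[k]
    return x

def inverse_sim(y):
    """Inverse simulation: recover the input that reproduces the measured output."""
    u = np.zeros(N)
    u[:-1] = (y[1:] - a * y[:-1]) / b
    u[-1] = u[-2]
    return u

u_cmd = np.ones(N)                           # commanded actuator input
fault = np.where(t >= 10.0, -0.4, 0.0)       # additive actuator step fault at t = 10 s
y_meas = forward(u_cmd + fault)              # plant output with the faulty actuator

r_y = y_meas - forward(u_cmd)                # output residual (shaped by the dynamics)
r_u = inverse_sim(y_meas) - u_cmd            # InvSim input residual (reproduces the step)

print("input residual after the fault:", r_u[t >= 10.5].mean())   # ~ -0.4
```

Because the input residual recovers the fault as a step of the correct magnitude at the correct time, while the output residual is filtered by the system dynamics, the location and size of an actuator fault are easier to read from the input residual.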

Fungi Associated with Decline of Kikar (Acacia nilotica) and Red River Gum (Eucalyptus camaldulensis) in Faisalabad

During this research, a comprehensive survey of the tree-growing areas of Faisalabad district, Pakistan, was conducted to observe the symptoms, spectrum, occurrence and severity of A. nilotica and E. camaldulensis decline. The objective of the current research was to investigate the specific fungal pathogens involved in the decline of A. nilotica and E. camaldulensis. For this purpose, infected roots, bark, collar portions, stems, branches, leaves and infested soils were collected to identify the associated fungi. Potato dextrose agar (PDA) and Czapek Dox agar media were used for isolations, and the isolated fungi were identified microscopically. During the survey of urban locations of Faisalabad, disease incidence on kikar and eucalyptus was recorded as 3.9-7.9% and 2.6-7.1%, respectively. The survey of agroforestry zones of Faisalabad revealed a decline incidence of 7.5% on kikar along Sargodha road, while kikar was not planted along Satiana and Jhang roads. In eucalyptus trees, 4%, 8% and 0% disease incidence was observed on Jhang road, Sargodha road and Satiana road, respectively. The fungus isolated most frequently from kikar was Drechslera australiensis (5.00%), from the stem. Aspergillus flavus reached its maximum value (3.05%) in the bark, and Alternaria alternata its maximum value (2.05%) in the leaves. Rhizopus and Mucor spp. were recorded at minimum levels compared with Drechslera, Alternaria and Aspergillus. The fungus isolated most frequently from eucalyptus was Armillaria luteobubalina (5.00%), from the stem. The other fungi isolated were Macrophomina phaseolina and A. niger.

Adverse Reactions from Contrast Media in Patients Undergoing Computed Tomography at the Department of Radiology, Srinagarind Hospital

Background: The incidence of adverse reactions to iodinated contrast media has risen, and the dearth of reports on reactions to the administration of iso- and low-osmolar contrast media should be addressed. We therefore studied the profile of adverse reactions to iodinated contrast media, viz., (a) the body systems affected, (b) causality, (c) severity, and (d) preventability. Objective: To study adverse reactions (causes and severity) to iodinated contrast media at Srinagarind Hospital. Method: Between March and July 2015, 1,101 patients from the Department of Radiology were observed and interviewed for the occurrence of adverse reactions. The patients were classified per Naranjo's algorithm and through the use of an adverse reactions questionnaire. Results: A total of 105 cases (9.5%) reported adverse reactions (57% male; 43% female), among whom 2% had received iso-osmolar and 98% low-osmolar media. Diagnoses included hepatoma and cholangiocarcinoma (24.8%), colorectal cancer (9.5%), breast cancer (5.7%), cervical cancer (3.8%), lung cancer (2.9%), bone cancer (1.9%), and others (51.5%). Underlying diseases included hypertension and diabetes mellitus type 2. Mild, moderate, and severe adverse reactions accounted for 92%, 5% and 3%, respectively. The respective groups of escalating symptoms included (a) mild: urticaria, itching, rash, nausea, vomiting, dizziness, and headache; (b) moderate: hypertension, hypotension, dyspnea, tachycardia and bronchospasm; and (c) severe: laryngeal edema, profound hypotension, and convulsions. All reactions could be anticipated per Naranjo's algorithm. Conclusion: Mild to moderate adverse reactions to low-osmolar contrast media were most common, and these occurred immediately after administration. For patient safety and better outcomes, improving the identification of patients likely to have an adverse reaction is essential.

Design and Application of NFC-Based Identity and Access Management in Cloud Services

In response to a changing world and the rapid growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming increasingly important, especially for Software as a Service (SaaS), which in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag; this type of verification supports neither mobile device login nor user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) that addresses these shortcomings. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time for identity verification but also cuts two-factor authentication time by 80% and improves verification accuracy to 99.9% or better. In functional performance analyses, the NFC-IAM performed better in scalability and portability. The NFC-IAM app and back-end system, to be developed and deployed on mobile devices, will support IAM features and also offer users a more user-friendly experience and stronger security protection. In the future, the NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.

Online Battery Equivalent Circuit Model Estimation in the Continuous-Time Domain Using the Linear Integral Filter Method

Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a method for online ECM parameter identification using a continuous-time (CT) estimation method. The CT estimation method has several advantages over discrete-time (DT) estimation methods for ECM parameter identification because of the widely separated battery dynamic modes and the fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium-ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy than the conventional DT recursive least squares method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
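
For reference, the conventional DT recursive least squares baseline that the CT method is compared against can be sketched for a first-order Thevenin-type ECM written in ARX form; the cell parameters, sampling time and excitation below are invented, and the CT linear integral filter itself is not reproduced here.

```python
import numpy as np

def rls_ecm(i, v, lam=0.999):
    """DT recursive least squares for a 1st-order ECM in ARX form:
       y[k] = th1*y[k-1] + th2*i[k] + th3*i[k-1], with y = v - OCV.
       th1..th3 map back to R0, R1 and C1 of the Thevenin model."""
    theta, P = np.zeros(3), np.eye(3) * 1e3       # parameters and covariance
    y = v - v[0]                                  # crude OCV removal, for illustration only
    history = []
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], i[k], i[k - 1]])
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam
        history.append(theta.copy())
    return np.array(history)

# Hypothetical current/voltage record (replace with measured cell data)
rng = np.random.default_rng(0)
i_meas = rng.normal(0.0, 1.0, 2000)               # excitation current [A]
R0, R1, C1, dt = 0.01, 0.02, 2000.0, 1.0          # "true" cell parameters for the demo
a = np.exp(-dt / (R1 * C1))
v1 = np.zeros_like(i_meas)
for k in range(len(i_meas) - 1):                  # simulate the RC branch voltage
    v1[k + 1] = a * v1[k] + R1 * (1 - a) * i_meas[k]
v_meas = 3.7 - R0 * i_meas - v1                   # terminal voltage, OCV = 3.7 V

print("final ARX parameter estimates:", rls_ecm(i_meas, v_meas)[-1])
```

A CT formulation instead works with integrals of the continuous-time model equations over a finite window (the linear integral filter idea), which avoids direct differentiation of the measurements and is less sensitive to the combination of fast sampling and widely separated dynamic modes.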

Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization

Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided, and when they are significant, the performance of the real-time optimization is impaired and the expected energy saving is reduced. In this paper, the model mismatches of the chiller plant in real-time optimization are considered. In the real-time optimization of the chiller plant, a simplified semi-physical or grey-box chiller model is typically used, which must be identified from available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. The HGAs method combines the Genetic Algorithms (GAs) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional trial-and-error process of GAs. The identification of the model parameters is formulated as an optimization problem, in which the objective function is the least-squares error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It is shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.
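
The global-plus-local hybrid can be illustrated with a compact sketch: a basic GA followed by a Nelder-Mead refinement of the best individual. The placeholder chiller model, its parameter bounds and the operating data are assumptions for illustration, not the authors' grey-box model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def chiller_model(load, theta):
    """Placeholder grey-box model: predicted power = a + b*load + c*load**2."""
    a, b, c = theta
    return a + b * load + c * load**2

# Hypothetical operating data (part-load ratio vs. measured power)
load = np.linspace(0.2, 1.0, 50)
power = chiller_model(load, (15.0, 40.0, 30.0)) + rng.normal(0.0, 1.0, load.size)

def lse(theta):
    """Objective: least-squares error between model output and measurements."""
    return np.sum((chiller_model(load, theta) - power) ** 2)

def hybrid_ga(bounds, pop_size=40, gens=60, pmut=0.2):
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(gens):                              # GA: global search
        fitness = np.array([lse(ind) for ind in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(bounds))
            child = w * p1 + (1 - w) * p2              # arithmetic crossover
            if rng.random() < pmut:                    # mutation
                child = child + rng.normal(0.0, 0.1 * (hi - lo))
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([lse(ind) for ind in pop])]
    # local search: refine the GA result with a fast derivative-free method
    return minimize(lse, best, method="Nelder-Mead").x

print("identified parameters:", hybrid_ga([(0, 50), (0, 100), (0, 100)]))
```

The local stage replaces the trial-and-error tail of a pure GA run, which is the intent of the hybrid scheme described above.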

Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

A perfect start is a key factor for project completion on time. The study examined the effects of delayed mobilization of resources during the initial phases of a project. This paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A considerable percentage of the delays identified during project execution (from contract award to the end of the defects liability period) is attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared with the previous period. Contractors had delays/issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management skills related to workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in only minor slippages/delays in project completion; in addition, a good working environment between client and contractor resulted in better project execution and management. In the projects with minimal or no delays during the initial and construction periods, the contractor was mainly on the front foot in execution. Hence, a perfect start during the initial construction phase has a positive influence on project success. This paper studies each type of delay with real examples supported by statistical results and suggests mitigation measures. A detailed analysis was carried out with all stakeholders, based on the impact and occurrence of the delays, to arrive at practical and effective measures to mitigate them. The key to improvement is to have proper control measures and periodic evaluation/audit to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
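
The relative importance index and the Pareto view used in the analysis can be sketched briefly; the delay causes and survey ratings below are hypothetical, and the index is computed with the usual formula RII = ΣW / (A × N), where W are the ratings, A is the highest possible rating and N is the number of respondents.

```python
# Hypothetical survey: each delay cause rated by respondents on a 1-5 scale
ratings = {
    "late mobilization of manpower": [5, 4, 5, 4, 5, 3, 4],
    "delayed material procurement": [4, 4, 3, 5, 4, 4, 3],
    "permit / gate-pass approvals": [3, 3, 4, 2, 3, 4, 3],
    "late submission of drawings": [2, 3, 2, 3, 2, 2, 3],
}
A = 5  # highest possible rating

def rii(scores):
    """Relative importance index: sum of ratings / (max rating * respondents)."""
    return sum(scores) / (A * len(scores))

ranked = sorted(ratings.items(), key=lambda kv: rii(kv[1]), reverse=True)

# Pareto-style view: cumulative share of the summed index, to spot the vital few
total = sum(rii(s) for _, s in ranked)
cumulative = 0.0
for cause, scores in ranked:
    cumulative += rii(scores) / total
    print(f"{cause:32s}  RII = {rii(scores):.2f}  cumulative = {cumulative:4.0%}")
```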

Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil

The segregation of waste electrical and electronic equipment (WEEE) at the generating source, its qualitative and quantitative characterization and the identification of its origin, besides being integral parts of classification reports, are crucial steps for the success of its integrated management. The aim of this paper was to quantify WEEE generation at the Federal University of Espírito Santo (UFES), Brazil, as well as to define sources, temporary storage sites, main transportation routes and destinations, the most generated WEEE and its recycling potential. Quantification of the WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's asset management sector. Information on the EEE and WEEE flows on the campuses was obtained through questionnaires applied to University staff. A total of 6,028 units of data processing equipment were recorded as WEEE disposed of by the University between 2010 and 2015. Among this waste, the most generated items were CRT screens, desktops, keyboards and printers. Furthermore, it was observed that these WEEE units are temporarily stored in inappropriate places on the University campuses. In general, they are donated to NGOs of the city or sold through auctions (2010 and 2013). As for the recycling potential, from the primary processing and subsequent sale of the printed circuit boards (PCBs) from the computers, the amount collected could reach US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.

Fuzzy Inference System for Determining Collision Risk of Ship in Madura Strait Using Automatic Identification System

Madura Strait is considered one of the busiest shipping channels in Indonesia. The high vessel traffic density in Madura Strait poses a serious threat to navigational safety in this area, namely ship collision, and this study is an attempt to enhance the safety of marine traffic. A fuzzy inference system (FIS) is proposed to calculate the collision risk of ships. Collision risk is evaluated based on the ship domain, the Distance to Closest Point of Approach (DCPA), and the Time to Closest Point of Approach (TCPA). Data were collected utilizing the Automatic Identification System (AIS). The study considers several ship domain models to characterize the marine traffic in the waterway, and each encounter within the ship domain is analyzed to obtain the level of collision risk. The resulting ship risk levels can be used as guidance to avoid accidents, providing a brief description of traffic safety in Madura Strait and improving navigational safety in the area.
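
The inference step can be illustrated with a minimal zero-order Sugeno-style FIS over DCPA and TCPA; the membership ranges, rule outputs and the example encounter are illustrative assumptions, not the calibrated values of the study, and the ship-domain check is omitted for brevity.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Illustrative membership functions (DCPA in nautical miles, TCPA in minutes)
def dcpa_sets(d):
    return {"small": tri(d, -1, 0, 1), "medium": tri(d, 0.5, 1.5, 2.5),
            "large": tri(d, 2, 4, 6)}

def tcpa_sets(t):
    return {"short": tri(t, -1, 0, 10), "medium": tri(t, 5, 15, 25),
            "long": tri(t, 20, 40, 60)}

# Rule base: (DCPA term, TCPA term) -> crisp risk level (0 = safe, 1 = critical)
rules = {("small", "short"): 1.0, ("small", "medium"): 0.8, ("small", "long"): 0.5,
         ("medium", "short"): 0.7, ("medium", "medium"): 0.5, ("medium", "long"): 0.3,
         ("large", "short"): 0.4, ("large", "medium"): 0.2, ("large", "long"): 0.1}

def collision_risk(dcpa, tcpa):
    """Firing strength = min of the two memberships; output = weighted average."""
    mu_d, mu_t = dcpa_sets(dcpa), tcpa_sets(tcpa)
    w = {r: min(mu_d[r[0]], mu_t[r[1]]) for r in rules}
    total = sum(w.values())
    return sum(w[r] * rules[r] for r in rules) / total if total > 0 else 0.0

# Example encounter with AIS-derived DCPA/TCPA values (hypothetical)
print(f"collision risk = {collision_risk(dcpa=0.6, tcpa=8.0):.2f}")
```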

Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Fishing lights on the sea surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The DNB covers the spectral range of 500 to 900 nm and provides high sensitivity. However, the DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishing lights and surface lights are separated from lunar light reflected by clouds using a method based on the DNB and the infrared band, in which the detection limits are defined as a function of the brightness temperature, using the difference from the maximum temperature for each level of DNB radiance and the contrast of the DNB radiance against the background radiance. Fishing boats and structures on islands can also be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites from the microwaves reflected by surface targets. SAR faces a trade-off between spatial resolution and coverage when detecting small targets such as fishing boats. The distribution of fishing boats and island activities was detected with the ScanSAR Narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. Fishing boats were detected as single pixels of strongly scattering targets with the ScanSAR Narrow mode, whose spatial resolution is 30 m. Since the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were used as a threshold to separate the signals of fishing boats and island structures from background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the six-hour time lag between the DNB observations around midnight and the SAR observations in the morning or evening. Temporal changes of island activities were detected as changes in the mean DNB intensity over a circular area corresponding to a certain scale of activity. An increase in the mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities.
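
The look-angle-stratified thresholding used to separate boat signals from background can be sketched as follows; the simulated backscatter scene, the incidence-angle layout and the threshold multiplier k are assumptions for illustration.

```python
import numpy as np

def detect_targets(sigma0, look_angle, n_bins=10, k=8.0):
    """Flag pixels whose backscatter exceeds the background statistics of their
    own look-angle bin (threshold = mean + k * standard deviation)."""
    detections = np.zeros(sigma0.shape, dtype=bool)
    edges = np.linspace(look_angle.min(), look_angle.max(), n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (look_angle >= lo) & (look_angle <= hi)
        background = sigma0[sel]
        threshold = background.mean() + k * background.std()
        detections[sel] = sigma0[sel] > threshold
    return detections

# Hypothetical ScanSAR-like scene: speckled sea clutter plus two bright pixels
rng = np.random.default_rng(2)
scene = rng.gamma(shape=4.0, scale=0.005, size=(500, 500))   # sea clutter
angles = np.tile(np.linspace(20.0, 46.0, 500), (500, 1))     # incidence angle per column
scene[100, 200] = scene[350, 420] = 1.0                      # boat-like point targets

mask = detect_targets(scene, angles)
print("planted targets detected:", bool(mask[100, 200] and mask[350, 420]))
print("total detections (including possible clutter false alarms):", int(mask.sum()))
```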

Modeling and System Identification of a Variable Excited Linear Direct Drive

Linear actuators are deployed in a wide range of applications. This paper presents the modeling and system identification of a variable excited linear direct drive (LDD). The LDD is designed based on linear hybrid stepper technology, exhibiting the characteristic tooth structure of mover and stator. A three-phase topology provides the thrust force through alternating strengthening and weakening of the flux in the legs. To achieve the best possible synchronous operation, the phases are commutated sinusoidally. Although these LDDs provide high dynamics and drive forces, noise emission limits their operation in calm workspaces. To overcome this drawback, an additional excitation of the magnetic circuit is introduced to the LDD using enabling coils instead of permanent magnets. This new degree of freedom can be used to reduce force variations and the related noise by varying the excitation flux that is usually generated by permanent magnets. Hence, an identified simulation model is necessary to analyze the effects of this modification. In particular, the force variations must be modeled well in order to reduce them sufficiently. The model can be divided into three parts: the current dynamics, the mechanics and the force functions. These subsystems are described by differential equations or nonlinear analytic functions, respectively. Ordinary nonlinear differential equations are derived and transformed into state-space representation. Experiments have been carried out on a test rig to identify the system parameters of the complete model, using static and dynamic simulation-based optimizations. The results are verified in the time and frequency domains. Finally, the identified model provides a basis for the later design of control strategies to reduce the existing force variations.
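
The simulation-based identification of the mechanical subsystem can be sketched with a simplified single-mass mover model fitted by output-error optimization; the mass, friction and force-constant values, the excitation and the noise level are placeholders, and the current dynamics and force functions of the full LDD model are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0.0, 2.0, 400)
i_q = np.sin(2 * np.pi * (1 + 2 * t) * t)            # chirp-like current excitation

def simulate(params):
    """Single-mass mover: m*x'' = k_f*i_q(t) - d*x' - F_c*tanh(x'/v0)."""
    m, d, k_f, F_c = params
    v0 = 0.01                                          # smoothing of Coulomb friction

    def rhs(tt, y):
        x, v = y
        force = k_f * np.interp(tt, t, i_q) - d * v - F_c * np.tanh(v / v0)
        return [v, force / m]

    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=t[1] - t[0])
    return sol.y[0]                                    # mover position

# Hypothetical test-rig measurement generated from "true" parameters plus noise
true_params = (0.8, 3.0, 25.0, 1.5)                    # m [kg], d [Ns/m], k_f [N/A], F_c [N]
x_meas = simulate(true_params) + np.random.default_rng(3).normal(0.0, 1e-4, t.size)

def residuals(params):
    return simulate(params) - x_meas                   # output error to be minimized

fit = least_squares(residuals, x0=[1.0, 1.0, 10.0, 1.0],
                    bounds=([0.1, 0.0, 1.0, 0.0], [5.0, 20.0, 100.0, 10.0]))
print("identified parameters:", fit.x)
```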

Experimental Study of Unconfined and Confined Isothermal Swirling Jets

A 3C-2D PIV technique was applied to investigate the swirling flow generated by an axial-plus-tangential type swirl generator. This work focuses on the near-exit region of an isothermal swirling jet to characterize the effect of swirl on the flow field and to identify the large coherent structures, in both unconfined and confined conditions, for a geometrical swirl number Sg = 4.6. The effects of the Reynolds number on the flow structure were also studied. The experimental results show significant effects of the confinement on the mean velocity fields and their fluctuations. The size of the recirculation zone was significantly enlarged upon confinement compared with the free swirling jet, and increasing the Reynolds number further enlarged the recirculation zone. The frequency characteristics, measured with a capacitive microphone, indicate the presence of a periodic oscillation related to the existence of a precessing vortex core (PVC). Proper orthogonal decomposition (POD) of the jet velocity field was carried out, enabling the identification of coherent structures. The time coefficients of the first two most energetic POD modes were used to reconstruct the phase-averaged velocity field of the oscillatory motion in the swirling flow. The instantaneous minima of negative swirl strength values calculated from the instantaneous velocity field revealed the presence of two helical structures located in the inner and outer shear layers, and these structures fade out at an axial location of approximately z/D = 1.5 for the unconfined case and z/D = 1.2 for the confined case. By phase averaging the instantaneous swirling strength maps, the 3D helical vortex structure was reconstructed.
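
The snapshot POD and the phase-averaged reconstruction from the first two mode coefficients can be sketched as follows; the snapshot matrix here is a synthetic stand-in for the PIV velocity fields.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic snapshots: each column is one flattened velocity field, built from
# two spatial structures oscillating in quadrature (a PVC-like motion) plus noise
n_points, n_snap = 3000, 400
s1, s2 = rng.normal(size=(2, n_points))
theta = 2 * np.pi * 0.05 * np.arange(n_snap)
U = np.outer(s1, np.cos(theta)) + np.outer(s2, np.sin(theta))
U += 0.2 * rng.normal(size=U.shape)

# Snapshot POD via SVD of the mean-subtracted data
U_mean = U.mean(axis=1, keepdims=True)
Phi, sigma, Vt = np.linalg.svd(U - U_mean, full_matrices=False)
a = np.diag(sigma) @ Vt                               # temporal mode coefficients
energy = sigma**2 / np.sum(sigma**2)
print(f"energy captured by the first two modes: {100 * energy[:2].sum():.1f}%")

# Phase-averaged reconstruction from the two most energetic modes
phase = np.arctan2(a[1], a[0])                        # oscillation phase of each snapshot
bins = np.linspace(-np.pi, np.pi, 9)                  # eight phase bins
labels = np.clip(np.digitize(phase, bins) - 1, 0, 7)
low_order = U_mean + Phi[:, :2] @ a[:2]               # low-order velocity fields
phase_avg = np.stack([low_order[:, labels == b].mean(axis=1) for b in range(8)], axis=1)
print("phase-averaged field shape:", phase_avg.shape)
```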

Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. In this paper, a probability-based damage detection (PBDD) procedure built on model updating is presented, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element model updating method with a Monte Carlo simulation that accounts for the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm inspired by the movement of ideal gas molecules, which disperse rapidly in different directions and cover all of the available space, owing to the high speed of the molecules and their collisions with one another and with the surrounding boundaries. In the IGMM algorithm, an initial population of gas molecules is randomly generated, and the governing equations for the molecular velocities and the collisions between molecules are used to reach the optimal solutions. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is applied to two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.
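
The model-updating core of the procedure can be sketched for a small spring-mass chain, in which damage ratios scale the element stiffnesses and the objective compares model and "measured" natural frequencies; the structure, noise level and damage scenario are invented, and the EIGMM update rules themselves are not reproduced, since any global optimizer could minimize this objective inside the Monte Carlo loop.

```python
import numpy as np
from scipy.linalg import eigh

n_el = 5                                   # 5-DOF spring-mass chain
m = np.eye(n_el)                           # unit masses
k0 = 1000.0                                # nominal element stiffness

def stiffness(alpha):
    """Assemble K with element stiffnesses k0*(1 - alpha_i), alpha_i = damage ratio."""
    k = k0 * (1.0 - np.asarray(alpha, dtype=float))
    K = np.zeros((n_el, n_el))
    for j in range(n_el):                  # spring j links DOF j-1 (or ground) and DOF j
        K[j, j] += k[j]
        if j > 0:
            K[j - 1, j - 1] += k[j]
            K[j - 1, j] -= k[j]
            K[j, j - 1] -= k[j]
    return K

def natural_freqs(alpha):
    eigvals = eigh(stiffness(alpha), m, eigvals_only=True)
    return np.sqrt(np.abs(eigvals))        # natural frequencies [rad/s]

# "Measured" frequencies: 20% damage in element 2 plus measurement noise;
# resampling this noise gives the Monte Carlo part of the PBDD procedure
true_alpha = np.array([0.0, 0.0, 0.2, 0.0, 0.0])
rng = np.random.default_rng(5)
freq_meas = natural_freqs(true_alpha) * (1.0 + rng.normal(0.0, 0.002, n_el))

def objective(alpha):
    """Residual minimized by the updating algorithm (EIGMM in the paper)."""
    return np.sum(((natural_freqs(alpha) - freq_meas) / freq_meas) ** 2)

print("objective at the undamaged state:", objective(np.zeros(n_el)))
print("objective at the true damage    :", objective(true_alpha))
```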