Turkish Adolescents' Subjective Well-Being with Respect to Age, Gender and SES of Parents

This study investigates the effect of selected demographic factors on Turkish adolescents' subjective well-being. A total of 432 adolescents (247 girls and 185 boys), aged 15-17 and attending high school, participated in the study. The Positive and Negative Affect Scale and the Life Satisfaction Scale were used to measure subjective well-being. ANOVA was used to examine the effect of age, an independent-samples t-test was used for gender differences, and Pearson correlation was used to examine the effect of the socio-economic status (SES) of the adolescents' parents. According to the results, there is no gender difference in adolescents' subjective well-being, whereas SES and age have significant effects on it.
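For readers who want to reproduce this kind of analysis, the gender comparison and the SES correlation reduce to textbook statistics. A minimal sketch (the scores would come from the well-being scales; these are the standard formulas, not code from the study):

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def independent_t(x, y):
    # Student's t statistic for two independent samples (pooled variance),
    # as used for the girls-vs-boys comparison.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
```

The t statistic would then be compared against the critical value for nx + ny - 2 degrees of freedom.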

Artificial Neural Network with Steepest Descent Backpropagation Training Algorithm for Modeling Inverse Kinematics of Manipulator

Inverse kinematics analysis plays an important role in developing a robot manipulator, but it is not easy to derive the inverse kinematic equations, especially for a manipulator with many degrees of freedom. This paper describes an application of an artificial neural network to modeling the inverse kinematics of a robot manipulator. In this case, the robot has three degrees of freedom and was implemented for drilling printed circuit boards. The network architecture used for modeling is a multilayer perceptron trained with the steepest-descent backpropagation algorithm. The designed network has 2 inputs, 2 outputs and a varying number of hidden layers. Experiments were carried out varying the number of hidden layers and the learning rate. The results show that the best architecture for modeling the inverse kinematics is a multilayer perceptron with 1 hidden layer of 38 neurons. This network achieved an RMSE of 0.01474.
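As a rough sketch of the approach (not the authors' code), the following trains a multilayer perceptron with one hidden layer of 38 sigmoid neurons by plain steepest-descent backpropagation on a toy two-link planar arm; the link lengths, sampling ranges, learning rate and epoch count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a two-link planar arm (link lengths are assumed values):
# sample joint angles, compute end-effector (x, y) by forward kinematics,
# then learn the inverse map (x, y) -> (theta1, theta2).
theta = rng.uniform(0.1, 1.4, size=(500, 2))
l1, l2 = 1.0, 0.8
px = l1 * np.cos(theta[:, 0]) + l2 * np.cos(theta[:, 0] + theta[:, 1])
py = l1 * np.sin(theta[:, 0]) + l2 * np.sin(theta[:, 0] + theta[:, 1])
X, T = np.column_stack([px, py]), theta

def train_mlp(X, T, hidden=38, lr=0.05, epochs=2000):
    # One hidden sigmoid layer, linear outputs, batch steepest-descent backprop.
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, T.shape[1])); b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # hidden activations
        E = H @ W2 + b2 - T                          # output-layer error
        dH = (E @ W2.T) * H * (1.0 - H)              # error backpropagated
        W2 -= lr * (H.T @ E) / len(X); b2 -= lr * E.mean(axis=0)
        W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(axis=0)
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return float(np.sqrt(np.mean((H @ W2 + b2 - T) ** 2)))  # training RMSE
```

The paper evaluates RMSE the same way, but on the actual drilling manipulator's workspace rather than this toy arm.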

The Upconversion of Co-doped Nd3+/Er3+ Tellurite Glass

A series of tellurite glasses of the system 78TeO2-10PbO-10Li2O-(2-x)Nd2O3-xEr2O3, where x = 0.5, 1.0, 1.5 and 2.0, was successfully prepared, and the upconversion luminescence of the Nd3+/Er3+ co-doped tellurite glass was studied. From Judd-Ofelt analysis, the experimental lifetimes (τexp) of the glass series are found to be higher in the visible region, varying from 65.17 ms to 114.63 ms, whereas in the near-infrared (NIR) region the lifetimes vary from 2.133 ms to 2.270 ms. Meanwhile, the emission cross sections (σ) are found to vary from 0.004 x 10^-20 cm2 to 1.007 x 10^-20 cm2 with composition. The emission spectra of the glasses receive contributions from both Nd3+ and Er3+ ions, and nine significant transition peaks are observed. The upconversion mechanism of the co-doped tellurite glass is shown in schematic energy diagrams. In this work, excited-state absorption (ESA) is found to dominate the upconversion excitation process, as the Nd3+ excited-state levels are populated through a stepwise multiphonon process. An efficient excitation energy transfer (ET) is observed from Nd3+ as donor to Er3+ as acceptor, giving rise to the respective emission spectra.

Biplot Analysis for Evaluation of Tolerance in Some Bean (Phaseolus vulgaris L.) Genotypes to Bean Common Mosaic Virus (BCMV)

The common bean is the most important grain legume for direct human consumption in the world, and BCMV is one of the world's most serious bean diseases, capable of reducing the yield and quality of the harvested product. To determine the best tolerance index to BCMV and to identify tolerant genotypes, two field experiments were conducted. Twenty-five common bean genotypes were sown in two separate randomized complete block (RCB) designs with three replications, under contamination and non-contamination conditions. On the basis of the correlations among the indices, GMP, MP and HARM were determined to be the most suitable tolerance indices. Principal components analysis indicated that the first two components together explained 98.52% of the variation in the data; they were named potential yield and stress susceptibility, respectively. Based on the assessment of the BCMV tolerance indices and the biplot analysis, WA8563-4, WA8563-2 and Cardinal were the genotypes that exhibited potential seed yield under both contamination and non-contamination conditions.
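The three indices named above have standard definitions in the stress-tolerance literature; a minimal sketch, where Yp is yield under non-contamination (potential) and Ys under BCMV contamination (the yield values here are illustrative):

```python
import math

def tolerance_indices(yp, ys):
    # Standard tolerance indices computed from potential (yp) and
    # stress (ys) yields of one genotype.
    return {
        "MP":   (yp + ys) / 2,            # mean productivity
        "GMP":  math.sqrt(yp * ys),       # geometric mean productivity
        "HARM": 2 * yp * ys / (yp + ys),  # harmonic mean of the two yields
    }
```

Genotypes are then ranked by these indices, and the index vectors feed the principal components analysis behind the biplot.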

Video Mining for Creative Rendering

More and more home videos are being generated with the ever-growing popularity of digital cameras and camcorders. For many home videos, a photo rendering, whether capturing a moment or a scene within the video, provides a complementary representation to the video. In this paper, a video motion mining framework for creative rendering is presented. The user's capture intent is derived by analyzing video motion, and corresponding metadata is generated for each capture type. The metadata can be used in a number of applications, such as creating video thumbnails, generating panorama posters, and producing slideshows of the video.
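A capture-intent classifier of this kind can be sketched from per-frame global motion vectors; the categories and thresholds below are illustrative assumptions, not the paper's actual rules:

```python
import numpy as np

def classify_motion(dx, dy, pan_thresh=2.0, still_thresh=0.5):
    # Classify a video segment from its per-frame global motion
    # vectors (dx, dy, in pixels per frame). Thresholds are assumed.
    mean_mag = float(np.hypot(np.mean(dx), np.mean(dy)))
    if mean_mag >= pan_thresh:
        return "pan"       # sustained directional motion: panorama candidate
    if float(np.mean(np.hypot(dx, dy))) <= still_thresh:
        return "still"     # near-static camera: thumbnail/key-frame candidate
    return "general"       # unstructured motion: slideshow candidate
```

A real system would derive dx, dy from optical flow or encoder motion vectors and attach the resulting label as metadata for the rendering stage.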

Cell Growth and Metabolites Produced by Fluorescent Pseudomonad R62 in Modified Chemically Defined Medium

The chemically defined Schlegel's medium was modified to improve cell growth and the production of other metabolites by the fluorescent pseudomonad strain R62. The modified medium does not require pH control, as pH changes remain within ±0.2 units of the initial pH 7.1 during fermentation. Siderophore production by the strain was optimized in the modified medium containing 1% glycerol as the major carbon source, supplemented with 0.05% succinic acid and 0.5% L-tryptophan. Indole-3-acetic acid (IAA) production was higher when L-tryptophan was used at 0.5%, and 2,4-diacetylphloroglucinol (DAPG) production was higher in the medium amended with three trace elements. The optimized medium produced 2.28 g/l of dry cell mass and 900 mg/l of siderophore by the end of 36 h of cultivation, while the production levels of IAA and DAPG were 65 mg/l and 81 mg/l, respectively, at the end of 48 h of cultivation.

Meta Model Based EA for Complex Optimization

Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, i.e., approximations of the actual fitness function, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context, and presents two evolutionary algorithm frameworks that use meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta models (here, approximate models generated by support vector machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original framework, incorporates a multiple-model-based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
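The controlled use of a cheap surrogate inside an EA loop can be sketched as follows; this toy uses a Gaussian RBF interpolant in place of SVM regression and a sphere function as the "expensive" fitness, so every numeric choice here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_f(x):
    # Stand-in for a costly fitness function (sphere); minimization.
    return float(np.sum(x ** 2))

def rbf_fit(X, y, eps=1.0):
    # Gaussian RBF interpolant over the archive: the cheap meta model.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-eps * D2) + 1e-8 * np.eye(len(X))
    w = np.linalg.solve(K, y)
    return lambda q: float(np.exp(-eps * ((X - q) ** 2).sum(-1)) @ w)

def surrogate_ea(dim=2, pop=20, gens=30, true_evals_per_gen=3):
    P = rng.uniform(-3, 3, (pop, dim))
    archive_X = P.copy()
    archive_y = np.array([expensive_f(x) for x in P])  # initial true evals
    for _ in range(gens):
        model = rbf_fit(archive_X, archive_y)
        children = P + rng.normal(0, 0.3, P.shape)     # Gaussian mutation
        scores = np.array([model(c) for c in children])  # cheap screening
        best = np.argsort(scores)[:true_evals_per_gen]   # only these get true evals
        for i in best:
            archive_X = np.vstack([archive_X, children[i]])
            archive_y = np.append(archive_y, expensive_f(children[i]))
        P = archive_X[np.argsort(archive_y)[:pop]]     # elitist survival
    return float(archive_y.min())
```

The surrogate is re-fitted from scratch each generation for simplicity; DAFHEA-style schemes additionally control when the approximation is trusted and when the true function must be called.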

An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLP), denoted CGFR/AG. The approach presented in this paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient of the error with respect to the weight and gain values, and (3) determining the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of back propagation by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from a neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and the neural network toolbox algorithm.
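The Fletcher-Reeves form of the search-direction update in step (3) (presumably the "FR" in CGFR/AG) can be sketched as follows; the fixed step size here stands in for a proper line search and is an assumption of this sketch:

```python
import numpy as np

def fletcher_reeves_direction(g_new, g_old, d_old):
    # Conjugate-gradient search direction: d_k = -g_k + beta_k * d_{k-1},
    # with the Fletcher-Reeves coefficient
    # beta_k = (g_k . g_k) / (g_{k-1} . g_{k-1}).
    beta = float(g_new @ g_new) / float(g_old @ g_old)
    return -g_new + beta * d_old

def cg_minimize(grad, x, steps=50, lr=0.1):
    # Minimal CG loop; a fixed step size replaces the usual line search.
    g = grad(x)
    d = -g                       # first step is steepest descent
    for _ in range(steps):
        x = x + lr * d
        g_new = grad(x)
        d = fletcher_reeves_direction(g_new, g, d)
        g = g_new
    return x
```

In the paper's setting, `grad` would be the backpropagated gradient of the network error with respect to all weights and gains, flattened into one vector.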

Thermodynamic Analysis of Activated Carbon-CO2 based Adsorption Cooling Cycles

Heat-powered solid sorption is a feasible alternative to electrically driven vapor-compression refrigeration systems. In this paper, activated carbon (powder-type Maxsorb and fiber-type ACF-A10)-CO2 based adsorption cooling cycles are studied using the pressure-temperature-concentration (P-T-W) diagram. The specific cooling effect (SCE) and the coefficient of performance (COP) of these two cooling systems are simulated for driving heat source temperatures ranging from 30 ºC to 90 ºC and different cooling load temperatures, with a cooling source temperature of 25 ºC. The analysis shows that the Maxsorb-CO2 pair gives the higher cooling capacity and COP; the maximum COPs of the Maxsorb-CO2 and ACF(A10)-CO2 based cooling systems are found to be 0.15 and 0.083, respectively. The main innovative feature of this cooling cycle is its ability to utilize low-temperature waste heat or solar energy using CO2 as the refrigerant, one of the best alternatives for applications where flammability and toxicity are not acceptable.
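At the level of a simple energy balance, the two figures of merit reduce to ratios; the uptake swing, latent heat and driving heat below are illustrative assumptions, not the Maxsorb or ACF-A10 data:

```python
def specific_cooling_effect(w_max, w_min, h_evap):
    # SCE per kg of adsorbent: the refrigerant mass cycled per kg adsorbent
    # (w_max - w_min, kg/kg) times the evaporation enthalpy (kJ/kg).
    return (w_max - w_min) * h_evap

def cop(sce, q_driving):
    # COP = useful cooling output / driving heat input (same units).
    return sce / q_driving
```

In the paper these quantities are evaluated along the P-T-W diagram of each working pair, which is what makes the uptake swing depend on the driving and cooling temperatures.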

Analysis of a Mathematical Model for Dengue Disease in Pregnant Cases

Dengue fever is an important human arboviral disease, and outbreaks are now reported quite often from many parts of the world. The number of cases involving pregnant women and infants is increasing every year. The illness is often severe, complications may occur, and deaths often result from difficulties in early diagnosis and from improper management of the disease. Dengue antibodies from pregnant women are passed on to their infants, protecting them from dengue infection; the antibodies are transferred to the fetus while it is still in the womb. In this study, we formulate a mathematical model to describe the transmission of this disease in pregnant women. The model divides the human population into pregnant women and non-pregnant humans (men and non-pregnant women), and each class is subdivided into susceptible (S), infectious (I) and recovered (R) subclasses. We apply standard dynamical analysis to the model: conditions for the local stability of the equilibrium points are given, numerical simulations are shown, and the bifurcation diagrams of the model are discussed. The control of this disease in pregnant women is discussed in terms of the threshold conditions.
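A minimal numerical sketch of the human side of such a model (ignoring the vector dynamics, with illustrative rates and initial fractions) splits an SIR system into pregnant and non-pregnant classes that share one force of infection:

```python
def simulate(beta=0.3, gamma=0.1, days=200, dt=0.1):
    # Euler integration of an SIR model split into pregnant (p) and
    # non-pregnant (n) classes; both feel the same force of infection.
    Sp, Ip, Rp = 0.05, 0.0005, 0.0     # pregnant fractions of the population
    Sn, In, Rn = 0.9445, 0.005, 0.0    # non-pregnant fractions
    for _ in range(int(days / dt)):
        lam = beta * (Ip + In)         # well-mixed force of infection
        dSp, dIp, dRp = -lam * Sp, lam * Sp - gamma * Ip, gamma * Ip
        dSn, dIn, dRn = -lam * Sn, lam * Sn - gamma * In, gamma * In
        Sp += dt * dSp; Ip += dt * dIp; Rp += dt * dRp
        Sn += dt * dSn; In += dt * dIn; Rn += dt * dRn
    return Sp, Ip, Rp, Sn, In, Rn
```

With beta/gamma = 3 the epidemic takes off and most of both classes end up in the recovered compartments; the full model in the paper additionally couples both classes to an infected mosquito population.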

Treatment of Oily Wastewater by Fibrous Coalescer Process: Stage Coalescer and Model Prediction

The coalescer process is one of the methods for treating oily water: it increases the oil droplet size in order to enhance the separation velocity and thus achieve effective separation. However, the presence of surfactants in an oily emulsion can limit these mechanisms, owing to the small oil droplet size associated with the stabilized emulsion. The purpose of this research is therefore to improve the efficiency of the coalescer process for treating stabilized emulsions. The effects of bed type, bed height, liquid flow rate and staged coalescer configuration (step-bed) on the treatment efficiency, expressed in terms of COD values, were studied; the treatment efficiency was estimated experimentally from the COD values and the oil droplet size distribution. The study has shown that the plastic media attach oil droplets more effectively than the stainless steel media, owing to their hydrophobic surface. Furthermore, a suitable bed height (3.5 cm) and step-bed arrangement (3.5 cm with 2 steps) were necessary to obtain good coalescer performance. Applying the step-bed coalescer process in the reactor provided higher treatment efficiencies, in terms of COD removal, than the classical process. The proposed model for predicting the area under the curve, and thus the treatment efficiency, based on the single collector efficiency (ηT) and the attachment efficiency (α), shows relatively good agreement between the experimental and predicted treatment efficiencies in this study.
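The paper's own prediction model is not reproduced here, but the classic clean-bed filtration form shows how α and ηT typically enter a bed-removal expression; treat this as a generic sketch under that assumption, not as the authors' equation:

```python
import math

def bed_removal_efficiency(alpha, eta_T, bed_depth, d_collector, porosity=0.4):
    # Classic clean-bed form: fraction of droplets captured across a bed of
    # depth bed_depth (m) packed with collectors of diameter d_collector (m);
    # alpha = attachment efficiency, eta_T = single-collector efficiency.
    exponent = -1.5 * (1 - porosity) / d_collector * alpha * eta_T * bed_depth
    return 1.0 - math.exp(exponent)
```

All geometric values in the test below are illustrative; in the study, α would differ between the hydrophobic plastic and stainless steel media.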

Segmentation of Ascending and Descending Aorta in CTA Images

In this study, a new and fast algorithm for segmenting the ascending aorta (AscA) and descending aorta (DesA) in Computed Tomography Angiography (CTA) images is presented. This process is particularly important for the detection of aortic plaques, aneurysms, calcification and stenosis. The method consists of four steps. First, lung segmentation is performed. Second, the mediastinum region (MR) is detected for use in the segmentation. Third, an optimal threshold is applied to the images and components outside the MR are removed. Finally, the AscA and DesA are identified and segmented. Radiologists judged the performance of the method to be good, and the results are adequate for medical and surgical use.
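The abstract does not specify which optimal-threshold criterion was used in the third step; Otsu's method is a common choice for this kind of intensity-based separation and can be sketched as:

```python
import numpy as np

def otsu_threshold(img):
    # Exhaustive Otsu over an 8-bit image: pick the gray level that
    # maximizes the between-class variance of the two resulting classes.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0       # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2               # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

In the CTA pipeline the thresholded components would then be masked by the detected mediastinum region before identifying the AscA and DesA.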

Business Diversification Strategies in the Italian Energy Markets

The liberalization and privatization processes have forced public utility companies to face new competitive challenges, implementing strategies to gain market share while keeping their existing customers. To this end, many companies have carried out mergers, acquisitions and conglomerations in order to diversify their business. This paper focuses on companies operating in the free energy market in Italy. In the last decade, this sector has undergone profound changes that have radically altered the competitive scenario and led companies to implement business diversification strategies. Our work aims to evaluate the economic and financial performance achieved by energy companies since the beginning of the liberalization process, and to verify its possible relationship with the diversification strategies implemented.

A Force Measurement Evaluation Tool for Telerobotic Cutting Applications: Development of an Effective Characterization Platform

Sensorized instruments that accurately measure the interaction forces (between biological tissue and the instrument end-effector) during surgical procedures offer surgeons a greater sense of immersion during minimally invasive robotic surgery. Although there is ongoing research into force measurement involving surgical graspers, little corresponding effort has been directed at measuring the forces between scissor blades and tissue. This paper presents the design and development of a force measurement test apparatus, which will serve as a sensor characterization and evaluation platform. The primary aim of the experiments is to ascertain whether the system can differentiate between tissue samples with differing mechanical properties in a reliable, repeatable manner. Force-angular-displacement curves highlight trends in the cutting process, as well as the forces generated along the blade during a cutting procedure. Future applications of the test equipment will involve the assessment of new direct force sensing technologies for telerobotic surgery.

The Use of Information for Inventory Decision in the Healthcare Industry

In this study, we explore the use of information for inventory decisions in a healthcare organization (HO). We consider the scenario in which the HO can make use of information collected from correlated products to enhance its inventory planning. Motivated by our real-world observation that HOs adopt RFID and bar-coding systems for information collection, we examine the effectiveness of these systems for inventory planning with Bayesian information updating. We derive the optimal ordering decision and study the issue of Pareto improvement in the supply chain. Our analysis demonstrates that the RFID system will outperform the bar-coding system when the RFID installation cost and tag cost fall to a level comparable with that of the bar-coding system. We also show how an appropriately set wholesale pricing contract can achieve Pareto improvement in the HO supply chain.
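The Bayesian updating step and the resulting order quantity can be sketched with a conjugate normal model and the classic newsvendor critical ratio; the priors, costs and observations below are all illustrative, not the paper's model:

```python
import math
from statistics import NormalDist

def posterior_mean_var(mu0, var0, obs, obs_var):
    # Conjugate normal update of the belief about mean demand from one
    # (noisy) observation of a correlated product: precision-weighted average.
    prec = 1.0 / var0 + 1.0 / obs_var
    mean = (mu0 / var0 + obs / obs_var) / prec
    return mean, 1.0 / prec

def newsvendor_order(post_mean, post_var, demand_var, cu, co):
    # Order the critical-ratio quantile of the posterior-predictive demand:
    # cu = unit underage cost, co = unit overage cost.
    crit = cu / (cu + co)
    return NormalDist(post_mean, math.sqrt(post_var + demand_var)).inv_cdf(crit)
```

An RFID system would provide more frequent, lower-noise observations (smaller `obs_var`) than bar-coding, tightening the posterior and improving the order decision.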

The Role Played by Swift Change of the Stability Characteristic of Mean Flow in Bypass Transition

The scenario of bypass transition is generally described as follows: low-frequency disturbances in the free stream may generate long stream-wise streaks in the boundary layer, which later may trigger secondary instability, leading to a rapid increase of high-frequency disturbances; turbulent spots then emerge and, through their merging, lead to fully developed turbulence. This description, however, is insufficient in that it does not provide the inherent mechanism of transition: during transition, a large number of waves with different frequencies and wave numbers appear almost simultaneously, producing a Reynolds stress large enough that the mean flow profile changes rapidly from laminar to turbulent. In this paper, such a mechanism is identified from an analysis of DNS data of transition.

Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset; there is a lack of generic comparison between classifiers that might guide biologists or bioinformaticians in selecting the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of truly discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by its ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the proportion of discriminators and perform better when the number of discriminators is higher.
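The mock-data construction can be sketched as follows: a chosen fraction of features is shifted by a fixed effect size in the case group, and a rank-based AUC measures separability. The sample size, feature count, fraction and effect size here are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(7)

def mock_dataset(n=200, p=500, frac_disc=0.05, effect=0.8):
    # n samples, p standard-normal features; the first frac_disc * p
    # features discriminate cases (y=1) from controls (y=0) by a mean
    # shift of the given standardized effect size.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, p))
    k = int(frac_disc * p)
    X[y == 1, :k] += effect          # shift the discriminating biomarkers
    return X, y, k

def auc(scores, y):
    # Rank-based AUC: fraction of (case, control) pairs where the case
    # outscores the control (ties count as losses; fine for continuous scores).
    pos, neg = scores[y == 1], scores[y == 0]
    return float((pos[:, None] > neg[None, :]).mean())
```

A classifier's score on held-out samples would replace the raw feature averages used in the test below.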

Manual Testing of Web Software Systems Supported by Direct Guidance of the Tester Based On Design Model

Software testing is an important stage of the software development cycle. The current testing process involves a tester and electronic documents with test case scenarios. In this paper we focus on a new approach to the testing process, using automated test case generation and tester guidance through the system based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process, thus eliminating unnecessary paperwork.
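Given a page-transition model of the web system, test-case generation can be as simple as enumerating paths through the model, each path becoming one guided scenario for the tester; the toy site map below is illustrative:

```python
def generate_test_paths(model, start, max_depth=6):
    # Enumerate click-paths through a page-transition model (adjacency
    # dict: page -> list of reachable pages) by depth-first search.
    # Each path ending at a leaf page, or at the depth limit, becomes
    # one manual test case for the tester to follow step by step.
    paths = []

    def dfs(node, path):
        nexts = model.get(node, [])
        if not nexts or len(path) >= max_depth:
            paths.append(path)
            return
        for nxt in nexts:
            dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths
```

A guidance tool would present each generated path one step at a time, highlighting the next page or action and recording the tester's verdict.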

Transmission Model for Plasmodium Vivax Malaria: Conditions for Bifurcation

Plasmodium vivax malaria differs from P. falciparum malaria in that a person suffering from P. vivax infection can suffer relapses of the disease. This is due to the parasite being able to remain dormant in the liver of the patient, from where it can re-infect the patient after a passage of time; during this stage, the patient is classified as being in the dormant class. The model describing the transmission of P. vivax malaria consists of a human population divided into four classes: the susceptible, the infected, the dormant and the recovered. The effect of a time delay on the transmission of the disease is studied; the delay is the period during which the P. vivax parasite develops inside the mosquito (vector) before the vector becomes infectious (i.e., able to pass on the infection). We analyze the model using standard dynamical modeling methods. Two stable equilibrium states, a disease-free state E0 and an endemic state E1, are found to be possible. The E0 state is stable when a newly defined basic reproduction number G is less than one; if G is greater than one, the endemic state E1 is stable. The conditions for the endemic equilibrium state E1 to be a stable spiral node are established. For realistic values of the model parameters, solutions in phase space are trajectories spiraling into the endemic state. It is shown that limit cycle and chaotic behaviors can only be achieved with unrealistic parameter values.
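The stable-spiral condition can be checked numerically from the Jacobian of the linearized system at an equilibrium: complex eigenvalues with negative real parts. A generic 2x2 sketch (the matrices in the test are illustrative, not the model's actual Jacobian):

```python
import numpy as np

def classify_equilibrium(J):
    # Classify an equilibrium from its Jacobian: a stable spiral requires
    # complex-conjugate eigenvalues whose real parts are all negative.
    eig = np.linalg.eigvals(J)
    if np.all(eig.real < 0):
        return "stable spiral" if np.any(eig.imag != 0) else "stable node"
    return "unstable"
```

For the full delay model, the same idea applies to the roots of the characteristic equation rather than to a plain matrix spectrum.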

Color Constancy using Superpixel

Color constancy algorithms are generally based on simplifying assumptions about the spectral distribution or the reflection attributes of the scene surface. In reality, however, these assumptions are too restrictive. A methodology is proposed to extend existing algorithms by applying color constancy locally to image patches rather than globally to the entire image. In this paper, a method based on low-level image features computed over superpixels is proposed. Superpixel segmentation partitions an image into regions that are approximately uniform in size and shape. Instead of using the entire pixel set to estimate the illuminant, only the superpixels carrying the most valuable information are used. Large-scale experiments on real-world scenes show that the estimation is more accurate using superpixels than using the entire image.
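A per-superpixel illuminant estimate can be sketched with the grey-world assumption (the average reflectance of a region is achromatic); pooling only over selected superpixels mirrors the paper's idea, though the low-level features used for the selection itself are not modeled here:

```python
import numpy as np

def grey_world_illuminant(pixels):
    # Grey-world estimate for one region: its mean RGB, normalized to
    # a unit-length illuminant direction.
    est = pixels.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def estimate_from_superpixels(image, labels, keep):
    # Pool grey-world estimates from the selected superpixels only.
    # image: (H, W, 3) floats; labels: (H, W) superpixel ids;
    # keep: the label ids judged most informative (assumed given).
    ests = [grey_world_illuminant(image[labels == s]) for s in keep]
    est = np.mean(ests, axis=0)
    return est / np.linalg.norm(est)
```

The estimated illuminant direction would then be divided out of the image channels (von Kries correction) to obtain the color-corrected result.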