Knowledge, Perceptions and Acceptability of Strengthening Adolescents’ Sexual and Reproductive Health Education amongst Secondary Schools in Gulu District

Adolescents in Northern Uganda are at risk of teenage pregnancies, unsafe abortions and sexually transmitted infections (STIs), yet there is silence about sex both at home and at school. This cross-sectional descriptive analytical study interviewed a random sample of 827 students and 13 teachers on knowledge, perceptions and acceptability of comprehensive adolescent sexual and reproductive health education (ASRHE) in “O” and “A” level secondary schools in Gulu District. Quantitative data were analyzed using SPSS 16.0. Directed content analysis of transcribed qualitative data was conducted manually to identify common codes, sub-categories and categories. Of the 827 students, 54.3% (449) reported being in a sexual relationship, especially those aged 15-17 years. The majority, 96.1% (807), supported the teaching of comprehensive ASRHE, with 71.5% (601) citing no negative impact. Most students, 81.6% (686), agreed that such education could help prevent STIs, abortions and teenage pregnancies, and 69.0% (580) felt it should be taught by health workers. A majority, 76.6% (203), reported that ASRHE was not currently being taught in their schools. Students had low knowledge levels and misconceptions about ASRHE. ASRHE was highly acceptable though not emphasized; its success in school settings requires multidisciplinary, culturally sensitive approaches in which health workers should be at the forefront.

A Model of Network Security with Prevention Capability by Using Decoy Technique

This research work proposes a model of a network security system aiming to prevent the production system in a data center from being attacked by intruders. Conceptually, we introduce a decoy system as part of the security system for luring intruders, and couple a network intrusion detection system (NIDS) with the decoy system to perform intrusion prevention. When the NIDS detects intrusive activity, it signals a redirection module to redirect all malicious traffic to the decoy system instead, so that the production system remains protected and safe. In a normal situation, traffic is simply forwarded to the production system as usual. Furthermore, we assess the performance of the model with various bandwidths, packet sizes and inter-attack intervals (attack frequencies).
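
As an illustration of the redirection idea described above, the following minimal Python sketch steers traffic flagged by the NIDS to a decoy host and everything else to the production host; the host addresses and the alert set are assumptions, not part of the proposed model.

```python
# Hypothetical sketch of the redirection decision: traffic from sources flagged
# by the NIDS goes to the decoy host, everything else to the production host.
# Host addresses and the alert set are illustrative only.
PRODUCTION_HOST = "10.0.0.10"   # assumed production server address
DECOY_HOST = "10.0.0.99"        # assumed decoy (honeypot) address

def choose_destination(packet_src: str, alerted_sources: set) -> str:
    """Return the host a packet should be forwarded to."""
    if packet_src in alerted_sources:
        return DECOY_HOST          # lure the intruder to the decoy system
    return PRODUCTION_HOST         # normal traffic flows to production

# Example: the NIDS has flagged 192.168.1.50 as malicious.
alerts = {"192.168.1.50"}
print(choose_destination("192.168.1.50", alerts))  # -> 10.0.0.99
print(choose_destination("192.168.1.20", alerts))  # -> 10.0.0.10
```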

Info-participation of the Disabled Using Mixed Preference Data to Improve Their Travel Quality

Today, the preferences and participation of transportation-disadvantaged (TD) groups such as the elderly and the disabled are still lacking in transportation planning decision-making, and their reactions to certain types of policies are not well known; a clear methodology is therefore needed. This study aimed to develop a method to extract the preferences of the disabled for use in the policy-making stage, one that can also guide future estimations. The method combines cluster analysis and data filtering using data from Arao city (Japan). The process is as follows: the TD group is defined with the cluster analysis tool; their travel preferences are extracted in tabular form from the household surveys by policy variable-impact pairs, zones, and trip purposes; and the final outcome is the preference probabilities of the disabled. The preferences vary by trip purpose: for work trips, accessibility and transit system quality policies dominate, with accompanying impacts of modal shifts towards public transport, decreasing travel costs, and increasing trip rates; for social trips, the same accessibility and transit system policies lead to the same mode shift impact, together with the travel quality policy area leading to a trip rate increase. These results indicate which policies to focus on and can be used in scenario generation in models, or for any other planning purpose, as a decision support tool.
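
A minimal sketch of the clustering step, assuming k-means over a handful of invented household-survey features, is shown below; the feature columns, values, and number of clusters are illustrative only.

```python
# Minimal sketch: household-survey records are grouped with k-means, and the
# cluster dominated by elderly/disabled respondents is treated as the TD group.
# The feature columns and k=3 are assumptions, not the study's actual variables.
import numpy as np
from sklearn.cluster import KMeans

# columns: [age, disability flag (0/1), car ownership (0/1), trips per day]
survey = np.array([
    [72, 1, 0, 1.0],
    [35, 0, 1, 3.5],
    [68, 1, 0, 0.8],
    [29, 0, 1, 4.0],
    [80, 1, 0, 0.5],
    [45, 0, 1, 2.5],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(survey)
for record, label in zip(survey, labels):
    print(label, record)   # inspect which cluster collects the TD profiles
```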

Auto Classification for Search Intelligence

This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase in which human experts determine the categories of different Web pages, and the supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second is a categorization phase in which a web crawler crawls the World Wide Web to build a database categorized according to the results of the data mining approach. This database contains URLs and their categories.
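
To make the training phase concrete, the sketch below mines frequent terms and term pairs (in the Apriori style) from a tiny set of expert-labelled pages; the documents and the support threshold are invented for illustration.

```python
# Illustrative sketch of the training idea: mine frequent term pairs (Apriori
# style) from pages already labelled by experts, so each category ends up with
# weighted index terms. The documents and min_support value are made up.
from itertools import combinations
from collections import Counter

labelled_pages = {
    "sports": [{"football", "league", "score"},
               {"football", "score", "team"},
               {"league", "team", "score"}],
}
MIN_SUPPORT = 2  # assumed threshold: a term (set) must appear in >= 2 pages

def frequent_itemsets(pages, min_support):
    # frequent single terms
    term_counts = Counter(t for page in pages for t in page)
    frequent_terms = {t for t, c in term_counts.items() if c >= min_support}
    # candidate pairs built only from frequent terms (the Apriori property)
    pair_counts = Counter()
    for page in pages:
        for pair in combinations(sorted(frequent_terms & page), 2):
            pair_counts[pair] += 1
    frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
    return frequent_terms, frequent_pairs

terms, pairs = frequent_itemsets(labelled_pages["sports"], MIN_SUPPORT)
print(terms)   # candidate weighted index terms for the "sports" category
print(pairs)   # frequent term pairs (highest-support rules)
```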

Development of Software for Calculating Production Parameters in Knitted Garment Plants

Apparel product development is an important stage in the life cycle of a product, and shortening this stage helps to reduce the cost of a garment. The aim of this study is to examine the production parameters in knitwear apparel companies by defining unit costs, and to develop software that calculates the unit costs of garments and produces cost estimates. With the help of a questionnaire, the unit cost estimating and cost calculating systems of different companies were analyzed. Within the scope of the questionnaire, the importance of the cost estimating process for apparel companies and the expectations from a new cost estimating program were investigated. The results showed that the majority of participating companies use manual cost calculating methods or simple Microsoft Excel spreadsheets to make cost estimates. Furthermore, many companies have difficulty archiving cost data for future use; as a solution, it is proposed that prior to making a cost estimate, the sub-units of garment cost, namely fabric, accessory and labor costs, should be analyzed and added to the program's database beforehand. Another feature of the cost estimating software prepared in this study is that it was designed to consist of two main units, one producing the product specification and the other performing the cost calculation. The program was prepared as a web-based application so that the supplier, the manufacturer and the customer can communicate through the same platform.
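
A minimal sketch of the unit-cost idea, assuming pre-stored fabric, accessory and labor sub-costs plus an overhead rate, is given below; all figures are illustrative, not data from the questionnaire.

```python
# Minimal sketch of the unit-cost idea: fabric, accessory and labor sub-costs
# are stored beforehand and summed per garment. The figures and overhead rate
# are illustrative assumptions, not data from the paper.
cost_database = {
    "fabric":    {"single jersey": 4.20},        # cost per garment, assumed currency
    "accessory": {"buttons+label": 0.35},
    "labor":     {"basic t-shirt": 1.10},
}

def unit_cost(fabric, accessory, labor, overhead_rate=0.15):
    base = (cost_database["fabric"][fabric]
            + cost_database["accessory"][accessory]
            + cost_database["labor"][labor])
    return round(base * (1 + overhead_rate), 2)   # add assumed overhead

print(unit_cost("single jersey", "buttons+label", "basic t-shirt"))  # -> 6.5
```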

A Novel Fuzzy Logic Based Controller to Adjust the Brightness of the Television Screen with Respect to Surrounding Light

One of the major causes of eye strain and other problems experienced while watching television is the relative illumination between the screen and its surroundings. This can be overcome by adjusting the brightness of the screen with respect to the surrounding light. A controller based on fuzzy logic is proposed in this paper. The fuzzy controller takes the intensity of light surrounding the screen and the present brightness of the screen as inputs. Its output is the grid voltage corresponding to the required brightness; this voltage is applied to the CRT and the brightness is controlled dynamically. For the given test system data, different defuzzification methods have been implemented and the results compared. To validate the effectiveness of the proposed approach, a fuzzy controller has been designed using test data obtained from a real-time system. The simulations are performed in MATLAB and verified against standard system data. The proposed approach can be implemented for real-time applications.
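
The following self-contained sketch illustrates the fuzzy inference idea with triangular membership functions and weighted-average defuzzification; the membership ranges, rules and voltage scale are assumptions rather than the controller actually designed in the paper.

```python
# Minimal fuzzy sketch: ambient light and current screen brightness are
# fuzzified with triangular sets, simple rules produce a grid-voltage
# adjustment, and a weighted average defuzzifies it. All membership ranges,
# rules and the voltage scale are assumptions for illustration.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def brightness_adjustment(ambient, current):
    # fuzzify inputs (both normalised to 0..100)
    dark, bright = tri(ambient, -1, 0, 50), tri(ambient, 50, 100, 101)
    dim, vivid = tri(current, -1, 0, 50), tri(current, 50, 100, 101)
    # rules: dark room & vivid screen -> lower; bright room & dim screen -> raise
    lower = min(dark, vivid)
    raise_ = min(bright, dim)
    # weighted-average defuzzification onto an assumed voltage change (volts)
    if lower + raise_ == 0:
        return 0.0
    return (lower * -10 + raise_ * 10) / (lower + raise_)

print(brightness_adjustment(ambient=20, current=80))  # negative: dim the screen
```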

Web Log Mining by an Improved AprioriAll Algorithm

This paper sets forth the possibility and importance of applying data mining to Web log mining and points out some problems with conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm carries the User ID property through every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set that will be used to produce the next candidate set. Meanwhile, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits within memory capacity.
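
The sketch below illustrates the role of the User ID: a candidate page sequence gains support only when its pages occur, in order, within the same user's session; the log entries, candidates and support threshold are invented, and the full AprioriAll machinery is omitted.

```python
# Rough sketch of carrying the User ID through candidate counting: a candidate
# page sequence only gains support if its pages occur, in order, within the
# same user's session. Log entries and min_support are made up.
from collections import defaultdict

web_log = [  # (user_id, page) in time order, illustrative only
    ("u1", "/home"), ("u1", "/search"), ("u1", "/item"),
    ("u2", "/home"), ("u2", "/item"),
    ("u3", "/search"), ("u3", "/item"),
]
MIN_SUPPORT = 2

sessions = defaultdict(list)
for user, page in web_log:
    sessions[user].append(page)

def sequence_support(seq):
    """Count users whose session contains seq as an ordered subsequence."""
    count = 0
    for pages in sessions.values():
        it = iter(pages)
        if all(p in it for p in seq):   # ordered containment check
            count += 1
    return count

candidates = [("/home", "/item"), ("/search", "/item"), ("/item", "/home")]
large_set = [c for c in candidates if sequence_support(c) >= MIN_SUPPORT]
print(large_set)   # sequences frequent across users, e.g. ('/home', '/item')
```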

Choosing R-tree or Quadtree Spatial Data Indexing in an Oracle Spatial Database System to Speed Up Geographical Map Display in Mobile Geographical Information System Technology

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of everyday “business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless Internet technology can be used to transfer spatial data between server and client. However, wireless Internet suffers from system bottlenecks that can make data transfer inefficient, because of the large volume of spatial data involved. Optimization of the data transfer and retrieval process is therefore an essential issue. An appropriate decision between the R-tree and Quadtree spatial data indexing methods can optimize this process. With the rapid proliferation of spatial databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching, and commercial database vendors such as Oracle have started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and Quadtree spatial indexing using an Oracle Spatial database in a mobile GIS application. Under our research conditions, using the Quadtree and R-tree spatial data indexing methods in a single spatial database can save up to 42.5% of the time.
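
A hedged sketch of how the two index types could be requested on the same layer in Oracle Spatial is shown below; the table, column, credentials and SDO_LEVEL value are placeholders, with the R-tree being Oracle's default and the quadtree requested through a tiling parameter.

```python
# Hedged sketch of creating the two index types on an Oracle Spatial layer.
# Table/column names, credentials and SDO_LEVEL are assumptions, not values
# from the paper. R-tree is the default; a quadtree is requested via SDO_LEVEL.
import oracledb  # python-oracledb driver (assumed available)

RTREE_DDL = """
    CREATE INDEX map_rtree_idx ON map_layer (geom)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
"""
QUADTREE_DDL = """
    CREATE INDEX map_qtree_idx ON map_layer (geom)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
    PARAMETERS ('SDO_LEVEL=8')
"""

def create_index(ddl: str) -> None:
    # connection details are placeholders, not from the paper
    with oracledb.connect(user="gis", password="secret",
                          dsn="localhost/XEPDB1") as conn:
        with conn.cursor() as cur:
            cur.execute(ddl)

# Depending on query shape and data distribution, one of the two indexes may
# answer map-window queries for the mobile client noticeably faster.
```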

Decision Support System Based on Data Warehouse

A typical intelligent decision support system is built on four components: the Data Warehouse, Online Analytical Processing, Data Mining, and model-based decision support; such a system is called a Decision Support System Based on a Data Warehouse (DSSBDW). This approach takes ETL, OLAP and DM as its implementation means and integrates traditional model-driven DSS and data-driven DSS into a whole. This paper analyzes the DSSBDW architecture and the DW model, and discusses the following key issues: ETL design and realization; metadata management technology using XML; SQL implementation, performance optimization and data mapping in OLAP; and finally the design principles and methods of the DW in DSSBDW.

A Software Tool Design for Cerebral Infarction Analysis of MR Images

A brain MR imaging-based clinical research and analysis system was built, targeting the development of large-scale data, and general clinical data were used to build it. For registration, the lesion ROI was selected using a region growing algorithm, and a mesh-warp algorithm was implemented for matching; matching errors were corrected individually to improve accuracy. In addition, large volumes of ROI research data can be accumulated through the compression method we developed. In this way, sound decision criteria for the research results were suggested. The data can easily be queried by experimental group, such as age, sex, MR type, patient ID and smoking status. The resulting data were visualized as overlapped images using a color table and analyzed with a statistical package. The utility of the system was evaluated on chronic ischemic damage in patients with acute cerebral infarction; the damage associated with the neurologic disability index was located in the region facing the central portion of the lateral ventricle, where the corona radiata is found. Finally, system reliability was measured by both inter-user and intra-user registration correlation.
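
A simplified region-growing sketch for selecting a lesion ROI is given below; the toy image, seed and tolerance are assumptions and stand in for the actual MR data and parameters.

```python
# Simplified region-growing sketch: starting from a seed pixel, neighbouring
# pixels are added while their intensity stays within a tolerance of the seed.
# The toy image, seed and tolerance are assumptions.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=20):
    grown = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x]:
            continue
        if abs(int(image[y, x]) - int(seed_val)) > tol:
            continue
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                queue.append((ny, nx))
    return grown

toy_slice = np.array([[10, 12, 80, 82],
                      [11, 13, 81, 83],
                      [10, 12, 80, 84]], dtype=np.uint8)
print(region_grow(toy_slice, seed=(1, 2)))   # True over the bright lesion block
```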

Time-Derivative Estimation of Noisy Movie Data using Adaptive Control Theory

This paper presents an adaptive differentiator for sequential data based on adaptive control theory. The algorithm is applied to detect moving objects by estimating the temporal gradient of sequential data at a specified pixel. We adopt two nonlinear intensity functions to reduce the influence of noise. The derivatives of the nonlinear intensity functions are estimated by an adaptive observer with a σ-modification update law.
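
For reference, a generic σ-modification update law takes the following form; the notation is illustrative and not necessarily the exact law used by the authors.

```latex
% Generic sigma-modification adaptive law; notation is illustrative, not the
% paper's: \hat\theta parameter estimate, \Gamma > 0 adaptation gain,
% \phi regressor, e estimation error, \sigma > 0 leakage coefficient.
\begin{equation}
  \dot{\hat{\theta}}(t) = \Gamma\bigl(\phi(t)\,e(t) - \sigma\,\hat{\theta}(t)\bigr)
\end{equation}
```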

Flowability and Strength Development Characteristics of Bottom Ash Based Geopolymer

Despite the preponderant role played by cement among construction materials, it is today considered a material that damages the environment because of the large quantities of carbon dioxide emitted during its manufacture. Moreover, global warming is now recognized worldwide as a threat to humankind, against which advanced countries are investigating measures to halve current emissions by 2050. Accordingly, efforts to reduce greenhouse gases are being exerted in all industrial fields. In particular, the cement industry strives to reduce the consumption of cement through the development of alkali-activated geopolymer mortars using industrial byproducts such as bottom ash. This study gathers basic data on the flowability and strength development characteristics of alkali-activated geopolymer mortar by examining its FT-IR features with respect to the effects and strength of the alkali-activator, in order to develop a bottom ash-based alkali-activated geopolymer mortar. The results show that a 35:65 mass ratio of sodium hydroxide to sodium silicate is appropriate and that a sodium hydroxide molarity of 9 M is advantageous. The ratio of alkali-activators to bottom ash is seen to have little effect on strength. Moreover, the FT-IR analysis reveals that larger improvements in strength shift the peak from 1060 cm–1 (T-O, T = Si or Al) toward lower wavenumbers.

A Semantic Web Based Ontology in the Financial Domain

The paper describes the design of an ontology in the financial domain for mutual funds. The design of this ontology consists of four steps, namely specification, knowledge acquisition, implementation and semantic query. Specification includes a description of the taxonomy, the different types of mutual funds and their scope. Knowledge acquisition involves information extraction from heterogeneous resources. Implementation describes the conceptualization and encoding of this data. Finally, semantic querying permits complex queries over the integrated data by mapping database entities to ontological concepts.
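
A hedged sketch of the semantic query step, assuming rdflib and an invented namespace with example fund classes and properties, is shown below.

```python
# Hedged sketch of the "semantic query" step: a couple of mutual-fund facts are
# encoded as triples and queried with SPARQL. The namespace, class and property
# names are invented for illustration, not taken from the paper's ontology.
from rdflib import Graph, Namespace, Literal, RDF

FIN = Namespace("http://example.org/finance#")   # assumed namespace
g = Graph()
g.add((FIN.AlphaGrowthFund, RDF.type, FIN.EquityFund))
g.add((FIN.AlphaGrowthFund, FIN.expenseRatio, Literal(0.85)))
g.add((FIN.SafeBondFund, RDF.type, FIN.DebtFund))
g.add((FIN.SafeBondFund, FIN.expenseRatio, Literal(0.40)))

results = g.query("""
    PREFIX fin: <http://example.org/finance#>
    SELECT ?fund ?ratio WHERE {
        ?fund a fin:EquityFund ;
              fin:expenseRatio ?ratio .
    }
""")
for fund, ratio in results:
    print(fund, ratio)    # equity funds with their expense ratios
```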

Fault Detection of Pipeline in Water Distribution Network System

Water pipe networks are installed underground, and once in place it is difficult to recognize the state of the pipes when a leak or burst happens. Accordingly, remedial management is often delayed after a fault occurs, so a systematic fault management system for the water pipe network is required to prevent accidents and minimize losses. In this work, we develop an online fault detection system for a water pipe network using pipe data such as flow rate and pressure. A transient model describing water flow in pipelines is presented and simulated using MATLAB. Fault situations such as leaks or bursts can also be simulated, and the flow rate and pressure data produced when the fault happens are collected. Faults are detected using the fast Fourier transform and the discrete wavelet transform, and the two methods are compared to find which shows the better fault detection performance.
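
The sketch below applies the two transforms to a synthetic flow-rate signal with an injected burst; the signal model, wavelet choice and threshold are assumptions, not the detector tuned in the paper.

```python
# Sketch of the two detection transforms on a synthetic flow-rate signal with a
# step-like "burst" injected halfway; the signal model, wavelet ('db4') and
# thresholding idea are assumptions.
import numpy as np
import pywt

t = np.linspace(0, 10, 1000)
flow = 5.0 + 0.02 * np.random.randn(t.size)   # nominal flow with sensor noise
flow[t.size // 2:] -= 0.8                      # sudden drop: simulated burst

# FFT view: a fault changes the low-frequency content of the signal
spectrum = np.abs(np.fft.rfft(flow - flow.mean()))

# DWT view: detail coefficients spike at the instant of the abrupt change
_, *details = pywt.wavedec(flow, "db4", level=3)
finest = details[-1]
fault_suspected = np.max(np.abs(finest)) > 5 * np.std(finest)
print("fault suspected:", fault_suspected)
```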

Organizational Decision Based on Business Intelligence

Nowadays, traditional statistics and reports are no longer adequate for the needs of organizational managers. In a world filled with information, managers need to analyze raw data and transform it into knowledge, and various processes have been developed for this purpose. Among them, artificial intelligence-based processes are used, and new topics such as business intelligence and knowledge discovery have emerged. This paper studies business intelligence and its applications in organizations.

Visualization of Sediment Thickness Variation for Sea Bed Logging using Spline Interpolation

This paper discusses the use of spline interpolation and the mean square error (MSE) as tools to process data acquired from a simulator developed to replicate the sea bed logging environment. Sea bed logging (SBL) is a technique that uses marine controlled source electromagnetic (CSEM) sounding and has proven very successful in detecting and characterizing hydrocarbon reservoirs in deep water areas by exploiting resistivity contrasts. It uses very low frequencies of 0.1 Hz to 10 Hz to obtain greater wavelengths. In this work, the in-house simulator was provided with predefined parameters, and the transmitted frequency was varied for sediment thicknesses of 1000 m to 4000 m in environments with and without hydrocarbon. From a series of simulations, synthetic data were generated. These data were interpolated using spline interpolation (degree three), and the MSE between the original and interpolated data was calculated. Comparisons were made by studying the trends and relationships between frequency and sediment thickness based on the calculated MSE. The MSE showed an increasing trend in the setup with hydrocarbon present compared to the one without, and a decreasing trend as sediment thickness increased and as the transmitted frequency increased.
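
A minimal sketch of the post-processing described above, assuming an arbitrary stand-in response curve rather than actual simulator output, is given below: the curve is sub-sampled, re-interpolated with a degree-three spline, and the MSE against the original data is computed.

```python
# Minimal sketch: synthetic responses are sub-sampled, re-interpolated with a
# degree-three spline, and the MSE against the original curve is computed.
# The synthetic "response" function is an arbitrary stand-in, not output of the
# authors' SBL simulator.
import numpy as np
from scipy.interpolate import CubicSpline

offsets = np.linspace(0, 10_000, 200)                   # receiver offsets (m)
response = np.exp(-offsets / 4000.0)                    # stand-in field decay

coarse = offsets[::10]                                  # sampled data points
spline = CubicSpline(coarse, response[::10])            # degree-3 spline fit
interpolated = spline(offsets)

mse = np.mean((response - interpolated) ** 2)
print(f"MSE between original and interpolated data: {mse:.3e}")
```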

Expert System for Choosing Gear Materials

In order to provide high expertise, the computer-aided design of mechanical systems involves specific activities focused on processing two types of information: knowledge and data. Rule-based expert knowledge generally processes qualitative information and involves searching for proper solutions and combining them into synthetic variants. Data processing is based on computational models and is meant to be interrelated with the reasoning in the knowledge processing. In this paper, an intelligent integrated system is proposed with the objective of choosing an adequate gear material. The software is developed in the Prolog-based Flex environment and takes into account various constraints that arise in the accurate operation of gears.
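
As a toy illustration (in Python rather than the authors' Prolog/Flex), the sketch below encodes a few material-selection rules and fires them against invented material properties and thresholds.

```python
# Toy rule-based sketch (Python, not the authors' Prolog/Flex) of how gear-
# material rules might be encoded and fired; the materials, properties and
# thresholds are invented for illustration only.
MATERIALS = {
    "C45 steel":               {"hardness": 200, "cost": 1.0, "impact_resistant": True},
    "20MnCr5 (case-hardened)": {"hardness": 600, "cost": 2.5, "impact_resistant": True},
    "POM polymer":             {"hardness": 80,  "cost": 0.4, "impact_resistant": False},
}

def choose_material(required_hardness, max_cost, needs_impact_resistance):
    """Return materials satisfying every constraint, cheapest first."""
    candidates = [
        name for name, p in MATERIALS.items()
        if p["hardness"] >= required_hardness
        and p["cost"] <= max_cost
        and (p["impact_resistant"] or not needs_impact_resistance)
    ]
    return sorted(candidates, key=lambda n: MATERIALS[n]["cost"])

print(choose_material(required_hardness=150, max_cost=3.0,
                      needs_impact_resistance=True))
# -> ['C45 steel', '20MnCr5 (case-hardened)']
```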

Improving Air Temperature Prediction with Artificial Neural Networks

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.
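
The sketch below illustrates the evaluation idea in the last sentences: the same model is instantiated several times with different initial random weights and the MAE is summarized across instantiations; the data are synthetic placeholders, not the weather observations used in the study.

```python
# Sketch of the evaluation idea: the same model (architecture + parameters) is
# instantiated several times with different random weights and the MAE is
# averaged. Data here are synthetic placeholders, not the study's observations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(-5, 35, size=(500, 24))          # e.g. 24 h of prior temps (°C)
y = X[:, -1] + rng.normal(0, 1.0, size=500)      # stand-in "temperature ahead"

maes = []
for seed in range(5):                            # the study used 30 instantiations
    net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000,
                       random_state=seed)
    net.fit(X[:400], y[:400])
    maes.append(mean_absolute_error(y[400:], net.predict(X[400:])))

print(f"mean MAE over instantiations: {np.mean(maes):.2f} °C "
      f"(spread {np.min(maes):.2f}-{np.max(maes):.2f})")
```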

Destination of the Solid Waste Generated at the Agricultural Products Wholesale Market in Brazil

The Brazilian Agricultural Products Wholesale Market fits well as an example of a residue-generating system, producing 750 metric tons of residues per month, of which 600 metric tons are organic material and 150 metric tons are recyclable materials. The organic material is basically composed of fruit, vegetable and flower leftovers from product commercialization, while the recyclable materials are generated from the packaging used in the commercialization process. This research work carried out a quantitative analysis of the residues generated by the agricultural enterprise and of their final destination. The data survey followed the directions of the Residues Management Program issued by the agricultural enterprise. The analysis revealed the need to change the logistics applied to the recyclable material collection process. Composting was chosen as the destination for the organic compounds, which is considered adequate for a material whose organic matter content is far higher than its wood, cardboard and plastic content.

Characteristics of E-waste Recycling Systems in Japan and China

This study aims to identify the processes, current situations, and issues of the recycling systems for four home appliances, namely air conditioners, television receivers, refrigerators, and washing machines, among e-wastes in China and Japan, in order to understand and compare their characteristics. Based on the results of a literature search, a review of information disclosed online, and a questionnaire survey, the conclusions of the study are as follows: (1) In Japan, most of the home appliances mentioned above have been collected through home appliance recycling tickets, resulting in an issue of “requiring some effort" in the treatment and recycling stages, and most plants have contracted out their e-waste recycling. (2) Advantages of the recycling system in Japan include ease of monitoring concrete data and thorough environmental friendliness, while its disadvantages include illegal dumping and export. Advantages of the recycling system in China include a high reuse rate, low treatment cost, and less illegal dumping, while its disadvantages include less safe reused products, environmental pollution caused by e-waste treatment, illegal import, and difficulty in obtaining data.