Does Effective Social Policy Guarantee Happiness?

The paper asks whether effective state social policy delivers happiness and social progress. To this end, selected correlations between the Human Development Index (HDI), the share of public social expenditure in GDP, the Happy Planet Index (HPI), GDP per capita, and Government Effectiveness are examined, and the results are presented graphically. It is shown how a government can affect well-being and happiness in different countries of the modern world. The hypothesis that there exists a certain optimum of well-being and public social expenditure, which affects the direction of social progress, is also tested. It is concluded that efficient social policy and wealth are not the only factors determining human happiness.
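
As a rough sketch of the kind of correlation analysis described, the snippet below computes pairwise Pearson correlations among the five indicators; the values and column names are made-up illustrations, not the paper's data.

```python
import pandas as pd

# Tiny illustrative country-level dataset (values are made up).
df = pd.DataFrame({
    "HDI":            [0.94, 0.89, 0.76, 0.70, 0.55],
    "social_exp_gdp": [28.0, 19.0, 14.0, 11.0, 6.0],     # % of GDP
    "HPI":            [36.0, 46.0, 40.0, 35.0, 30.0],
    "gdp_pc":         [55000, 42000, 15000, 9000, 2500], # USD
    "gov_eff":        [1.8, 1.4, 0.3, -0.2, -0.7],       # WGI-style score
})

# Pairwise Pearson correlations among the indicators.
print(df.corr(method="pearson").round(2))
```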

On the Numerical Approach for Simulating Thermal Hydraulics under Seismic Condition

The two-phase flow field and the motion of the free surface in an oscillating channel are simulated numerically to assess methodologies for simulating nuclear reactor thermal hydraulics under seismic conditions. Two numerical methods are compared: one models the oscillating channel directly using the moving grid of the Arbitrary Lagrangian-Eulerian (ALE) method, and the other represents the effect of channel motion by applying an oscillating acceleration to the fluid in a stationary channel. In both cases, the two-phase flow field in the oscillating channel is simulated using the level set method. The results calculated with the oscillating acceleration are found to coincide with those obtained with the moving grid, and the theoretical background and the limitations of the oscillating-acceleration approach are discussed. It is shown that the change in the interfacial area between the liquid and gas phases under seismic conditions is important for nuclear reactor thermal hydraulics.
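
As a minimal sketch of the second approach: if the channel displacement is x(t) = A sin(ωt), the fluid in the stationary-channel (non-inertial) frame feels the fictitious acceleration -x''(t) superposed on gravity. The parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

A = 0.01                    # assumed displacement amplitude [m]
omega = 2.0 * np.pi * 2.0   # assumed oscillation frequency: 2 Hz
g = 9.81                    # gravitational acceleration [m/s^2]

def effective_acceleration(t):
    """Body acceleration applied to the fluid in the stationary channel.

    Channel motion x(t) = A*sin(omega*t) has acceleration
    x''(t) = -A*omega**2*sin(omega*t); the non-inertial frame adds the
    fictitious term -x''(t) to gravity.
    """
    channel_accel = -A * omega**2 * np.sin(omega * t)
    return -g - channel_accel
```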

Exponential Particle Swarm Optimization Approach for Improving Data Clustering

In this paper we use exponential particle swarm optimization (EPSO) to cluster data. We then compare the EPSO clustering algorithm, which relies on an exponential variation of the inertia weight, with the particle swarm optimization (PSO) clustering algorithm, which relies on a linear inertia weight. The comparison is evaluated on five data sets. The experimental results show that the EPSO clustering algorithm increases the chance of finding the optimal positions, as it decreases the number of failures. They also show that the EPSO clustering algorithm has a smaller quantization error than the PSO clustering algorithm, i.e., the EPSO clustering algorithm is more accurate.
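
The abstract does not give the exact exponential schedule; a common form, shown here as an assumption, decays the inertia weight exponentially from w_max toward w_min over the iterations, in contrast to the standard linear decrease.

```python
import numpy as np

W_MAX, W_MIN = 0.9, 0.4   # typical PSO inertia-weight bounds
T = 100                   # total number of iterations

def linear_inertia(t):
    """Standard linearly decreasing inertia weight (baseline PSO)."""
    return W_MAX - (W_MAX - W_MIN) * t / T

def exponential_inertia(t, tau=30.0):
    """Assumed exponential schedule (the paper's exact form may differ)."""
    return W_MIN + (W_MAX - W_MIN) * np.exp(-t / tau)

for t in (0, 50, 100):
    print(t, round(linear_inertia(t), 3), round(exponential_inertia(t), 3))
```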

Speaker-Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression

An automatic speech recognition system for formal Arabic is needed. The Quran is the most formal spoken book in Arabic, and it is recited all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic recitation was developed and tested. The system was built on the tri-phone Hidden Markov Model and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations that reduce the mismatch between an initial model set and the adaptation data. It uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian-mixture HMM system. The 30th chapter of the Quran, recited by five of the most famous readers of the Quran, was used for training and testing. The chapter includes about 2000 distinct words. The advantages of using Quranic verses as the database for this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the test data ranged from 68% to 85%.
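
The core of MLLR mean adaptation, as described, is an affine transform of each Gaussian mean, adapted_mean = A·mean + b. A minimal sketch of applying one such transform (estimating A and b requires the adaptation statistics and is omitted; the numbers are illustrative):

```python
import numpy as np

def mllr_adapt_mean(mean, A, b):
    """Apply an MLLR mean transform: adapted_mean = A @ mean + b.

    A (d x d) and b (d,) are the transform estimated from adaptation data
    for the regression class this Gaussian belongs to.
    """
    return A @ mean + b

# Toy example in a 3-dimensional feature space (illustrative numbers).
mean = np.array([1.0, -0.5, 2.0])
A = np.eye(3) * 0.95            # near-identity scaling
b = np.array([0.1, 0.0, -0.2])  # small shift
print(mllr_adapt_mean(mean, A, b))
```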

Numerical Analysis of Wind Loads on a Hemicylindrical Roof Building

The flow field over a three-dimensional pole barn characterized by a cylindrical roof has been numerically investigated. Wind pressure and viscous loads acting on the agricultural building have been analyzed for several incoming wind directions in order to evaluate the most critical load condition on the structure. A constant wind velocity profile, based on the maximum reference wind speed at the building site (peak gust speed computed for a 50-year return period) and on the local roughness coefficient, has been simulated. To account also for the hazard due to potential air wedging between the stored hay and the lower part of the ceiling, the effect of a partial filling of the barn has been investigated. The distribution of wind-induced loads on the structure has been determined, allowing a numerical quantification of the effect of wind direction on the induced stresses acting on a hemicylindrical roof.

A New Type of Integration Error and its Influence on Integration Testing Techniques

Testing is an activity required in both the development and maintenance phases of the software development life cycle, and integration testing is an important part of it. Integration testing is based on the specification and functionality of the software and can thus be called a black-box testing technique. The purpose of integration testing is to test the integration between software components. In function or system testing, the concern is the overall behavior: whether the software meets its functional specifications or performance characteristics, and how well the software and hardware work together. This explains the importance and necessity of integration testing (IT), where the emphasis is on the interactions between modules and their interfaces. Software errors should be discovered early during integration testing to reduce the cost of correction. This paper introduces a new type of integration error, presents an overview of integration testing techniques with a comparison of each technique, and identifies which technique detects which type of error.
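
The abstract does not define the new error type; as a generic illustration of the class of faults integration testing targets, the hypothetical sketch below shows an interface mismatch between two modules that each pass their unit tests but fail when wired together.

```python
# Module A: returns temperature in Celsius (passes its unit tests).
def read_sensor_celsius() -> float:
    return 21.5

# Module B: expects Fahrenheit for its threshold check (also unit-tested
# in isolation, with Fahrenheit fixtures).
def is_overheated(temp_fahrenheit: float) -> bool:
    return temp_fahrenheit > 100.0

# Integration error: the unit mismatch only surfaces when the modules are
# combined, which is exactly what integration testing probes.
def check_system() -> bool:
    return is_overheated(read_sensor_celsius())  # Celsius passed as Fahrenheit

print(check_system())  # False even for genuinely hot Celsius readings
```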

The Effects of Whole-Body Vibration Training on Jump Performance in Handball Athletes

This study examined the effects of eight weeks of whole-body vibration training (WBVT) on vertical and decuple jump performance in handball athletes. Sixteen collegiate Level I handball athletes volunteered for this study. They were divided equally into a control group and an experimental group (EG). During the study, all athletes underwent the same handball-specific training, but the EG received additional WBVT (amplitude: 2 mm, frequency: 20 - 40 Hz) three times per week for eight consecutive weeks. Vertical jump performance was evaluated by the maximum height of the squat jump (SJ) and the countermovement jump (CMJ). Single-factor ANCOVA was used to examine the differences in each parameter between the groups after training, with the pretest values as a covariate. Statistical significance was set at p < .05. After eight weeks of WBVT, the EG had significantly improved the maximal height of the SJ (40.92 ± 2.96 cm vs. 48.40 ± 4.70 cm, F = 5.14, p < .05) and the maximal height of the CMJ (47.25 ± 7.48 cm vs. 52.20 ± 6.25 cm, F = 5.31, p < .05). Eight weeks of additional WBVT could therefore improve vertical and decuple jump performance in handball athletes. Enhanced motor unit synchronization and firing rates, a facilitated stretch-shortening cycle of muscular contraction, and improved lower-extremity neuromuscular coordination could account for these enhancements.
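
A minimal sketch of the analysis described (one-way ANCOVA on post-test jump height with the pretest as covariate), using statsmodels; the data are simulated stand-ins, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: 8 athletes per group, pre/post squat-jump heights (cm).
df = pd.DataFrame({
    "group": ["EG"] * 8 + ["CG"] * 8,
    "pre_sj": rng.normal(41.0, 3.0, 16),
})
gain = np.where(df["group"] == "EG", 7.0, 1.0)  # assumed training gains
df["post_sj"] = df["pre_sj"] + gain + rng.normal(0.0, 2.0, 16)

# One-way ANCOVA: posttest ~ group, adjusting for the pretest covariate.
model = smf.ols("post_sj ~ C(group) + pre_sj", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for the group effect
```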

Promoting Mathematical Understanding Using ICT in Teaching and Learning

Information and Communication Technologies (ICT) in mathematics education is a very active field of research and innovation in which learning is understood as meaningful engagement with multiple linked representations rather than rote memorization. A large body of literature, offering a wide range of theories, learning approaches, methodologies, and interpretations, generally stresses the potential of ICT for teaching and learning. Despite the use of new ICT-based learning approaches, students still experience difficulties in learning concepts relevant to understanding mathematics, and much remains unclear about the relationship between the computer environment, the activities it might support, and the knowledge that might emerge from such activities. Several questions arise in this regard: to what extent does the use of ICT help students in the process of understanding and solving tasks or problems? Is it possible to identify which aspects or features of students' mathematical learning can be enhanced by the use of technology? This paper highlights the value of integrating ICT into the teaching and learning of mathematics (quadratic functions); it aims to investigate the effect of four instructional methods on students' mathematical understanding and problem solving. Quantitative and qualitative methods are used to report on 43 middle school students. Results showed that mathematical thinking and problem solving evolve as students engage with ICT activities and learn cooperatively.

Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well

A concern that researchers usually face in different applications of Artificial Neural Networks (ANNs) is determining the size of the effective domain in a time series. In this paper, a trial-and-error method was applied to a groundwater depth time series from an observation well in Union County, New Jersey, U.S., to determine the size of the effective domain in the series. Different domains of the 20, 40, 60, 80, 100, and 120 preceding days were examined. Data sets for the different domains were fed to a feed-forward back-propagation ANN with one hidden layer, and the groundwater depths were forecasted. The Root Mean Square Error (RMSE) and the coefficient of determination (R2) of the estimated and observed groundwater depths were determined for all domains. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2 values, when the domain length increased from 20 to 120 days. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times the size of the input data in set #2, a factor that may increase the computational effort unpredictably. It was concluded that data averaged over the 80-day effective domain may be successfully utilized to lower the size of the input data sets considerably while maintaining the effective information in the data set.
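
A minimal sketch of the windowing scheme described: input samples are built from the N preceding daily depths and fed to a one-hidden-layer feed-forward network (scikit-learn's MLPRegressor stands in for the paper's back-propagation ANN; the series is simulated).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
depth = np.cumsum(rng.normal(0, 0.01, 2000)) + 10.0  # synthetic daily depths [m]

def make_windows(series, n):
    """Inputs: the n preceding daily values; target: the next day's depth."""
    X = np.array([series[i:i + n] for i in range(len(series) - n)])
    return X, series[n:]

for n in (20, 40, 80, 120):  # candidate effective-domain lengths
    X, y = make_windows(depth, n)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    rmse = mean_squared_error(y[split:], pred) ** 0.5
    print(n, round(rmse, 4), round(r2_score(y[split:], pred), 3))
```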

Optimization of Protein Hydrolysate Production Process from Jatropha curcas Cake

This is the first report on the optimization of protein hydrolysate production from J. curcas cake. Proximate analysis of the raw material showed 18.98% protein, 5.31% ash, 8.52% moisture, and 12.18% lipid. The most suitable protein hydrolysate production process began with grinding the J. curcas cake into small pieces. The ground cake was then suspended in 2.5% sodium hydroxide solution at a solution-to-cake ratio of 80:1 (v/w). The hydrolysis reaction was carried out at 50 °C in a water bath for 45 minutes. The supernatant (protein hydrolysate) was then separated by centrifugation at 8000 g for 30 minutes. The maximum yield of the resulting protein hydrolysate was 73.27%, with 7.34% moisture, 71.69% total protein, 7.12% lipid, and 2.49% ash. The product also dissolved well in water.

A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit

The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed, and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli that create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. These goals conflict, since enlarging the dynamic range reduces the speed at the same time. To achieve the best performance, the One-Hot Residue Number System (OHRNS) is considered; in this implementation, the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m². In real applications, this is practically impossible. In this paper, combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems: a novel design of an OHRNS-based adder circuit. The circuit is usable for Multiple Valued Logic moduli and, in comparison to other RNS designs, considerably improves the number of transistors and the power consumption.
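
A behavioral sketch of one-hot residue addition: a residue modulo m is encoded as one hot line among m, and adding b cyclically rotates the operand's hot line by b positions, which is what the single-transistor-delay barrel shifter realizes in hardware. The Python below is a software model of the encoding, not the circuit itself.

```python
def one_hot(x, m):
    """Encode residue x (mod m) as an m-line one-hot vector."""
    v = [0] * m
    v[x % m] = 1
    return v

def ohrns_add(a_hot, b, m):
    """Add residue b by cyclically rotating a's hot line b positions."""
    b %= m
    return a_hot[-b:] + a_hot[:-b] if b else list(a_hot)

# (3 + 4) mod 5 = 2, in a 5-valued (MVL-compatible) modulus.
m = 5
result = ohrns_add(one_hot(3, m), 4, m)
print(result, result.index(1))  # [0, 0, 1, 0, 0] 2
```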

Real Power Generation Scheduling to Improve Steady State Stability Limit in the Java-Bali 500kV Interconnection Power System

This paper discusses an active power generation scheduling method for increasing the steady-state stability limit of a system. Several generation scheduling optimization methods, namely Lagrange, PLN (the Indonesian electricity company) operation, and the proposed Z-Thevenin-based method, are studied and compared with respect to their steady-state aspects. The method proposed in this paper is built upon the Thevenin equivalent impedance values between each load and each generator. The steady-state stability index is obtained with the REI-DIMO method. This research examines the Java-Bali 500 kV interconnection system. The simulation results show that the proposed method yields the highest steady-state stability limit compared to the other optimization techniques, Lagrange and PLN operation. Thus, the proposed method can be used to improve the steady-state stability limit of the system, especially under peak load conditions.

Extraction of Symbolic Rules from Artificial Neural Networks

Although backpropagation ANNs generally predict better than decision trees for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained in the way decision tree predictions can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is assessed by comparing them to the symbolic rules generated by other methods. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach and its good generalization ability.
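
The abstract does not detail REANN's phases; as a generic illustration of the black-box-to-rules idea, this sketch trains a small network and reads a symbolic threshold rule off it by probing its decisions. This is a simple probing illustration under assumed data, not the REANN algorithm itself.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy one-feature problem: class 1 when the feature exceeds about 0.6.
X = rng.uniform(0, 1, (300, 1))
y = (X[:, 0] > 0.6).astype(int)
net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=3000,
                    random_state=0).fit(X, y)

# Recover a symbolic rule from the trained black box by probing a grid.
grid = np.linspace(0, 1, 1001).reshape(-1, 1)
pred = net.predict(grid)
threshold = grid[np.argmax(pred == 1), 0]  # first input predicted as class 1
print(f"Extracted rule: IF x > {threshold:.2f} THEN class 1 ELSE class 0")
```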

Study of Features for Hand-printed Recognition

The feature extraction method(s) used to recognize hand-printed characters plays an important role in ICR applications. To achieve a high recognition rate, the choice of a feature that suits the given script is certainly an important task. Even if a new feature needs to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. The Devanagari script is used in various Indian languages besides Hindi, the mother tongue of the majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in the context of hand-printed Devanagari script. The study is conducted theoretically and experimentally on more than 10 feature extraction methods. The feature extraction methods have been evaluated on a hand-printed Devanagari database comprising more than 25000 characters belonging to 43 alphabets. The recognition ability of the features has been evaluated using three classifiers: k-NN, MLP, and SVM.
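
A minimal sketch of the evaluation protocol described: the same feature matrix scored with the three named classifiers (k-NN, MLP, SVM). scikit-learn's digits set stands in for the Devanagari data, which is not available here.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in character data; in the study this would be the extracted
# features of >25000 hand-printed Devanagari samples over 43 classes.
X, y = load_digits(return_X_y=True)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```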

Evaluation of State of the Art IDS Message Exchange Protocols

During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks, and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyberspace. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on an Illinois water system demonstrate impressively. Control systems that were once isolated are nowadays often publicly reachable - a fact never intended by their developers. Threats to IT systems do not care about areas of responsibility. Especially with regard to Cyber Warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions, or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter the attacker in a coordinated way. Therefore, this publication evaluates state-of-the-art intrusion detection message exchange protocols with regard to enabling a secure information exchange between different entities.

Software Maintenance Severity Prediction for Object Oriented Systems

Since the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are severely affected compared to other modules, and proper maintenance needs to be done in time, especially for critical applications. Neural networks have already been applied in software engineering, e.g., to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques that are able to model complex functions; they are used when the exact nature of the input-output relationship is not known, and a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance need by predicting the severity level of faults present in NASA's public domain defect dataset. The algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error, and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
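
For reference, the three comparison criteria named above, computed here on illustrative predicted vs. observed severity levels (not the NASA data):

```python
import numpy as np

# Illustrative observed and predicted severity levels per module.
observed  = np.array([1, 3, 2, 4, 2, 1, 3, 4])
predicted = np.array([1, 2, 2, 4, 3, 1, 3, 3])

mae  = np.mean(np.abs(observed - predicted))          # Mean Absolute Error
rmse = np.sqrt(np.mean((observed - predicted) ** 2))  # Root Mean Square Error
acc  = np.mean(observed == predicted)                 # classification accuracy
print(mae, round(rmse, 3), acc)
```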

Multi-Agent Coordination Model in Inter-Organizational Workflow: Application in E-government

Inter-organizational Workflow (IOW) is commonly used to support collaboration between the heterogeneous and distributed business processes of different autonomous organizations in order to achieve a common goal. E-government is considered an application field of IOW. The coordination of the different organizations is the fundamental problem in IOW and remains a major cause of failure in e-government projects. In this paper, we introduce a new coordination model for IOW that improves collaboration between government administrations and that respects the IOW requirements applied to e-government. For this purpose, we adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity, and autonomy. Our model also integrates different technologies to deal with semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering distributed coordination based on communication between interfaces. This is especially relevant in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier and cheaper migration to an e-government solution. To illustrate the applicability of the proposed model, we present a case study of an identity card creation in Tunisia.

Digital Redesign of Interval Systems via Particle Swarm Optimization

In this paper, a PSO-based approach is proposed to derive a digital controller for redesigned digital systems having an interval plant, based on the resemblance of the extremal gain/phase margins. By combining the interval plant and a controller into an interval system, the extremal GM/PM associated with the loop transfer function can be obtained. The design problem is then formulated as an optimization problem over an aggregated error function revealing the deviation of the extremal GM/PM of the redesigned digital system from those of its continuous counterpart, and it is subsequently solved by the proposed PSO to obtain an optimal set of parameters for the digital controller. Computer simulations have shown that the frequency responses of a redesigned digital system with an interval plant bear a better resemblance to those of its continuous-time counterpart when the PSO-derived digital controller is incorporated than when existing open-loop discretization methods are used.
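
The aggregated error could take, for instance, a weighted sum-of-squares form over the extremal margins; the function below is an assumed illustration of such a PSO fitness, not the paper's exact formula.

```python
def aggregated_error(gm_dig, pm_dig, gm_cont, pm_cont, w_gm=1.0, w_pm=1.0):
    """Deviation of the redesigned system's extremal gain/phase margins
    (lists over the interval plant's extremal cases) from those of the
    continuous-time counterpart; the PSO minimizes this value."""
    err = 0.0
    for gd, pd, gc, pc in zip(gm_dig, pm_dig, gm_cont, pm_cont):
        err += w_gm * (gd - gc) ** 2 + w_pm * (pd - pc) ** 2
    return err

# Extremal GM (dB) and PM (deg) at two extremal plant cases
# (illustrative numbers only).
print(aggregated_error([6.1, 5.8], [44.0, 41.5], [6.0, 6.0], [45.0, 42.0]))
```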

Interactive Model Based On an Extended CPN

UML modeling of complex distributed systems is often a great challenge due to the large number of parallel, real-time operating components. In this paper, the problems of verifying such systems are discussed. ECPN, an Extended Colored Petri Net, is defined to formally describe the state transitions of components and the interactions among components. The relationship between sequence diagrams and Free Choice Petri Nets is investigated, since Free Choice Petri Net theory helps verify the liveness of sequence diagrams. By converting sequence diagrams to ECPNs and then comparing the behaviors of the sequence diagram ECPNs and the statecharts, the consistency among the models is analyzed. Finally, a verification process is demonstrated on an example model.

A Multi-Layer Consistency Protocol for Replica Management in Large Scale Systems

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and reliability of the data in order to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers, whose objective is twofold: first, it makes it possible to reduce response times compared to a completely pessimistic approach; second, it improves the quality of service compared to an optimistic approach.