Deterministic Random Number Generator Algorithm for Cryptosystem Keys

One of the crucial parameters of digital cryptographic systems is the selection and distribution of the keys used. The randomness of the keys has a strong impact on the system's security strength, since random keys are difficult for a cryptanalyst to predict, guess, reproduce, or discover. Adequate key randomness generation is therefore still sought for the benefit of stronger cryptosystems. This paper suggests an algorithm designed to generate and test pseudorandom number sequences intended for cryptographic applications. The algorithm is based on mathematically manipulating information publicly agreed upon by sender and receiver over a public channel. This information is used as a seed for a set of mathematical functions that generate a sequence of pseudorandom numbers to be used for encryption/decryption purposes. The manipulation involves permutations and substitutions that fulfill Shannon's principle of "confusion and diffusion". ASCII code characters were utilized in the generation process instead of bit strings, which adds flexibility in testing different seed values. The obtained results indicate that the generated keys are difficult for attackers to guess.
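To make the generation idea concrete, here is a minimal sketch of a seed-driven permutation/substitution generator. The abstract does not specify the paper's actual functions, so the rotation (permutation) and affine map (substitution) below are illustrative stand-ins, not the authors' method:

```python
def prng_sequence(seed: str, n: int) -> list[int]:
    """A minimal sketch: derive n pseudorandom bytes from a publicly shared seed.
    The paper's exact permutation/substitution functions are not given in the
    abstract; simple stand-ins are used here."""
    state = [ord(c) for c in seed]            # ASCII codes as the working state
    out = []
    for i in range(n):
        state = state[1:] + state[:1]         # permutation step (diffusion)
        state = [(7 * s + i + 13) % 256 for s in state]  # substitution step (confusion)
        out.append(sum(state) % 256)          # emit one pseudorandom byte
    return out

print(prng_sequence("shared public seed", 8))
```

Because every step is deterministic, sender and receiver derive the same sequence from the same publicly agreed seed.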

Bibliometric Analysis of the Impact of Funding on Scientific Development of Researchers

Every year, a considerable amount of money is invested in research, mainly in the form of funding allocated to universities and research institutes. To better distribute the available funds and to set the most appropriate R&D investment strategies for the future, evaluating the productivity of funded researchers and the impact of such funding is crucial. In this paper, using data on 15 years of journal publications by researchers funded by NSERC (Natural Sciences and Engineering Research Council of Canada) and by means of bibliometric analysis, the scientific development of the funded researchers and their scientific collaboration patterns are investigated for the period 1996-2010. The results suggest a positive relation between the average level of funding and both the quantity and quality of scientific output. In addition, whenever the funding allocated to researchers has increased, the number of co-authors per paper has also increased. Hence, an increase in the level of funding may enable researchers to get involved in larger projects and/or scientific teams and thereby increase their scientific output.

A Comparative Study of Malware Detection Techniques Using Machine Learning Methods

In the past few years, the amount of malicious software has increased exponentially, and machine learning algorithms have therefore become instrumental in separating clean files from malware through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time the machine learning algorithm needs to do so. This paper presents a comparative study of different machine learning techniques, such as linear classifiers, ensembles, decision trees, and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
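As a hedged sketch of how such a comparison is typically run (not the paper's pipeline, and on a synthetic stand-in dataset rather than real file features), the following scikit-learn snippet measures detection rate and false positive rate for three classifier families:

```python
# Synthetic, class-imbalanced stand-in for file-feature data; the paper's
# dataset is ~2M clean vs 200,000 infected files.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20_000, n_features=50,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("linear", SGDClassifier(random_state=0)),
                  ("tree", DecisionTreeClassifier(random_state=0)),
                  ("ensemble", RandomForestClassifier(random_state=0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tp = np.sum((pred == 1) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    detection = tp / np.sum(y_te == 1)   # detection rate: recall on malware
    fpr = fp / np.sum(y_te == 0)         # false positive rate on clean files
    print(f"{name}: detection={detection:.3f}, FPR={fpr:.4f}")
```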

Online Forums Hotspot Detection and Analysis Using Aging Theory

The exponential growth of social media has drawn much attention to public opinion information. Online forums, blogs, and microblogs are proving to be extremely valuable resources holding large volumes of information. However, most social media data is in unstructured or semi-structured form and is therefore difficult to decipher automatically, so understanding and analyzing such data is essential for making the right decisions. Online forum hotspot detection is a promising research field in web mining that helps guide users to the right decision at the right time. The proposed system is a novel approach to detecting a hotspot forum for any given time period. It uses aging theory to find hot terms and E-K-means to detect the hotspot forum. Experimental results demonstrate that the proposed approach outperforms k-means in detecting hotspot forums, with improved accuracy.
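The core of aging theory can be illustrated with a small sketch: a term's "energy" is fed by fresh mentions and decays over time, so sustained high energy marks a hot term. The gain and decay constants below are illustrative, not the paper's parameters:

```python
import math

def term_energy(counts, alpha=0.5, beta=0.1):
    """counts: mentions of a term per time slot; alpha: nutrition gain;
    beta: decay rate. Returns the energy trajectory (a hedged sketch of the
    aging-theory idea, not the paper's exact formulation)."""
    energy, trajectory = 0.0, []
    for c in counts:
        energy += alpha * c          # nutrition from new mentions
        energy *= math.exp(-beta)    # exponential decay per time slot
        trajectory.append(energy)
    return trajectory

# Energy rises while the term is mentioned, then fades once mentions stop.
print(term_energy([5, 9, 14, 3, 0, 0]))
```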

Forecasting of Grape Juice Flavor by Using Support Vector Regression

Research on juice flavor forecasting has become increasingly important in China. Owing to China's fast economic growth, many different kinds of juice have been introduced to the market, and a beverage company that understands its customers' preferences well can make its juice more attractive. This study therefore introduces the basic theory and computing process of grape juice flavor forecasting based on support vector regression (SVR). Applying SVR, BPN, and LR to forecast the flavor of grape juice on real data shows that SVR is the most suitable and effective in predictive performance.
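A minimal sketch of the SVR part of such a forecast with scikit-learn follows; the feature names and the synthetic data are hypothetical stand-ins for the study's real measurements:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))   # e.g. sugar, acidity, aroma scores (invented)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Standardizing inputs before the RBF-kernel SVR is the usual practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```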

A Study of Behavioral Phenomena Using ANN

Behavioral aspects of experience, such as will power, are rarely subjected to quantitative study owing to the numerous complexities involved. Will is a phenomenon that has puzzled humanity for a long time. It is commonly believed that an individual's will power affects the success they achieve in life; it is also thought that a person endowed with great will power can overcome even the most crippling setbacks, while a person with a weak will cannot make the most of even the greatest assets in life. This study is an attempt to subject the phenomenon of will to the test of an artificial neural network through a computational model. The claim being tested is that an individual's will power largely determines the success achieved in life. It is proposed that data pertaining to the success of individuals be obtained from an experiment, and that the phenomenon of will be incorporated into the model through data generated recursively using a will-success relation characteristic of the model. An artificial neural network trained on part of the data could subsequently be used to make predictions for the remaining data points of the model. The procedure would be repeated for different models, and the model whose network predictions agree most closely with the data would be selected and used for studying the relation between success and will.
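One hypothetical way to realize this procedure is sketched below: an assumed will-success relation generates data recursively, an ANN is trained on part of it, and agreement on the rest is scored. The relation g and all constants are invented for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def g(will, prev_success):                  # assumed model-specific relation
    return 0.7 * will + 0.3 * prev_success

will = rng.uniform(0, 1, 300)
success = np.zeros(300)
for i in range(1, 300):                     # recursive data generation
    success[i] = g(will[i], success[i - 1]) + 0.05 * rng.normal()

X = np.column_stack([will[1:], success[:-1]])
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X[:200], success[1:][:200])         # train on part of the data
# Agreement with the remaining data is the model-selection criterion.
print("agreement (R^2) on the rest:", ann.score(X[200:], success[1:][200:]))
```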

Colour Image Compression Method Based On Fractal Block Coding Technique

Image compression based on fractal coding is a lossy compression method normally applied to gray-level images, with range and domain blocks of rectangular shape. Fractal-based digital image compression provides a large compression ratio; this paper proposes a method that uses the YUV colour space together with fractal theory based on iterated transformations. Fractal geometry is applied in the current study to colour image compression coding. Colour images possess correlations among their colour components, and hence a high compression ratio can be achieved by exploiting these redundancies. The proposed method utilises the self-similarity within the colour image as well as the cross-correlations between its colour components. Experimental results show that a greater compression ratio can be achieved with large domain blocks, while image quality, despite the trade-off, remains good to acceptable at less than 1 bit per pixel.
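For readers unfamiliar with fractal block coding, here is a minimal grayscale sketch of the encoding step: each range block is matched against contracted domain blocks under an affine grey-level map. The paper's method extends this to YUV planes and their cross-correlations; the block sizes and the fixed contrast factor here are illustrative:

```python
import numpy as np

def encode(img, rb=4):
    """Store, per range block, the best-matching domain block and its affine
    grey-level map (a sketch; real coders also search rotations/flips)."""
    h, w = img.shape
    db = 2 * rb
    # Contract all domain blocks to range-block size by 2x2 averaging.
    domains = [(i, j, img[i:i+db, j:j+db].reshape(rb, 2, rb, 2).mean(axis=(1, 3)))
               for i in range(0, h - db + 1, db) for j in range(0, w - db + 1, db)]
    code = []
    for i in range(0, h, rb):
        for j in range(0, w, rb):
            r = img[i:i+rb, j:j+rb].astype(float)
            best = None
            for di, dj, d in domains:
                s = 0.75                          # fixed contrast scaling
                o = r.mean() - s * d.mean()       # brightness offset
                err = np.sum((r - (s * d + o)) ** 2)
                if best is None or err < best[0]:
                    best = (err, di, dj, s, o)
            code.append((i, j) + best[1:])        # transform for this block
    return code

img = np.tile(np.arange(16), (16, 1)).astype(float)  # toy 16x16 image
print(len(encode(img)), "range-block transforms stored")
```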

Social Network Analysis & Information Disclosure: A Case Study

The advent of social networking technologies has been met with mixed reactions in academic and corporate circles around the world. This study explored the influence of social networks in the current era and the relation maintained between a social networking site and its users in terms of extent of use, benefits, and latest technologies. The study followed a descriptive research design in which a questionnaire was used as the main research tool. Data was gathered from 1205 users and analyzed using SPSS 16 in accordance with the objectives of the study. The analysis of the results seems to suggest that the majority of users were mainly using Facebook and that, despite concerns raised about the disclosure of personal information on social network sites, users continue to disclose large quantities of personal information; they find reading privacy policies time-consuming, and the changes they make can result in improper settings.

Natural-Direction-Consistent 3D-Design and Printing Methods

Objects are usually horizontally sliced when printed by 3D printers. Therefore, if an object to be printed, such as a collection of fibers, originally has a natural direction in its shape, the printed direction contradicts the natural direction. By using proper tools, such as field-oriented 3D paint software, field-oriented solid modelers, field-based tool-path generation software, and non-horizontal FDM 3D printers, the natural direction can be modeled and objects can be printed in a direction consistent with it. This consistency allows momentum or force to be embodied in the expression of the printed object. To achieve this goal, several, but not all, design and manufacturing problems have been solved. An application of this method is (Japanese) 3D calligraphy.

Sensitivity Analysis of the ZF Model for ABC Multi-Criteria Inventory Classification

ABC classification is widely used by managers for inventory control. Classical ABC classification is based on the Pareto principle and on the criterion of annual use value only. Single-criterion classification is often insufficient for close inventory control, so researchers have proposed multi-criteria inventory classification models that take other important criteria into account. Among these models, we consider a specific one in order to perform a sensitivity analysis on the composite score calculated for each item. This score, based on a normalized average between a good and a bad optimized index, can affect the ABC classification of an item. We focus on items whose class assignment differs and then propose a compromise classification, as sketched below.
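A hedged sketch of a ZF-style composite score: each item's "good" index (most favourable weighting) and "bad" index (least favourable weighting) are normalized to [0, 1] and combined, and items are ranked on the result. The lambda parameter and the toy index values are illustrative, and the sensitivity analysis would vary lambda to see which items change class:

```python
import numpy as np

def composite_scores(good, bad, lam=0.5):
    """Normalized average between the good and bad optimized indices;
    lam shifts the weight between the two (illustrative, not the exact ZF form)."""
    g = (good - good.min()) / (good.max() - good.min())
    b = (bad - bad.min()) / (bad.max() - bad.min())
    return lam * g + (1 - lam) * b

good = np.array([0.9, 0.4, 0.7, 0.2, 0.8])   # hypothetical per-item indices
bad = np.array([0.6, 0.3, 0.9, 0.1, 0.5])
score = composite_scores(good, bad)
order = np.argsort(-score)                    # descending: A items first
print("ranking (item indices):", order, "scores:", np.round(score[order], 3))
```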

A Cost-Effective Approach to Developing Mid-size Enterprise Software Adopting the Waterfall Model

Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. We therefore introduce a cost-effective approach to designing mid-size enterprise software using the Waterfall model, one of the SDLC (Software Development Life Cycle) models. To fulfill the research objectives, we developed mid-size enterprise software named "BSK Management System" that assists enterprise software clients with information resource management and with performing complex organizational tasks. The Waterfall model phases were applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, a Rich Picture, Structured English, and a Data Dictionary were implemented and investigated in a proper engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.

Performance Evaluation of Task Scheduling Algorithm on LCQ Network

The scheduling and mapping of tasks onto a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network, a hybrid network that combines features of both linear and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and a thorough analysis is made to obtain the best solution for the given network in terms of the load imbalance left and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated with the given dynamic algorithms.
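One common way to quantify the "load imbalance left" after mapping is the deviation of the most loaded processor from the ideal (average) load; the sketch below uses that convention, though the paper's exact metric may differ, and the per-processor loads are invented:

```python
import numpy as np

def load_imbalance(loads):
    """Relative excess of the busiest processor over the ideal average load;
    0 means perfectly balanced (one common convention, assumed here)."""
    loads = np.asarray(loads, dtype=float)
    ideal = loads.mean()
    return (loads.max() - ideal) / ideal

mds_loads = [12, 10, 11, 15]   # hypothetical per-processor loads after MDS
trs_loads = [12, 12, 12, 12]   # hypothetical per-processor loads after TRS
print("MDS imbalance:", round(load_imbalance(mds_loads), 3))
print("TRS imbalance:", round(load_imbalance(trs_loads), 3))
```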

Optimized Weight Vector for QoS Aware Web Service Selection Algorithm Using Particle Swarm Optimization

Quality of Service (QoS) attributes, as part of the service description, are an important factor in web service selection. It is not easy to quantify the weight of each QoS condition exactly, since human judgments based on preference introduce vagueness. Because web service selection requires optimization, evolutionary computing, which uses heuristics to search for an optimal solution, is adopted. In this work, the evolutionary computing technique Particle Swarm Optimization (PSO) is used to select suitable web services based on the user's weighting of each QoS value, optimizing the QoS weight vector and thereby finding the best weight vectors for the services to be selected. Finally, the results obtained with the static inertia weight and the deterministic inertia weight of PSO are compared and analyzed.
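A minimal PSO sketch for tuning a QoS weight vector follows: particles are candidate weight vectors, and the fitness here is the best weighted QoS score a particle assigns across services. The QoS matrix, the fitness definition, and the inertia schedule are illustrative stand-ins, with a deterministic (linearly decreasing) inertia weight as one of the variants the paper compares:

```python
import numpy as np

rng = np.random.default_rng(0)
qos = rng.uniform(size=(20, 4))     # 20 services x 4 normalized QoS values

def fitness(w):
    w = np.abs(w); w = w / w.sum()  # keep weights positive, summing to 1
    return np.max(qos @ w)          # score of the best service under w

n, dim = 15, 4
x = rng.uniform(size=(n, dim)); v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmax(pbest_f)]
for t in range(100):
    w_inertia = 0.9 - 0.5 * t / 100   # deterministic (decreasing) inertia weight
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w_inertia * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)]
print("optimized weights:", np.round(np.abs(gbest) / np.abs(gbest).sum(), 3))
```

A static inertia weight would simply fix w_inertia (e.g. at 0.7) instead of decreasing it over iterations.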

Fault-Tolerant Optimal Broadcast Algorithm for the Hypercube Topology

This paper presents an optimal broadcast algorithm for hypercube networks. The main focus of the paper is the effectiveness of the algorithm in the presence of many node faults. For the optimal solution, our algorithm builds a spanning tree connecting all nodes of the network, through which messages are propagated from the source node to the remaining nodes. At any given time, at most n − 1 nodes may fail by crashing. We show that hypercube networks are strongly fault-tolerant. Simulation results are analyzed to characterize the algorithm under many node faults. We compared the simulation results of our proposed method with those of Fu's method: Fu's approach cannot tolerate n − 1 faulty nodes in the worst case, whereas our approach can.
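The fault-free baseline is the classical binomial spanning tree, which broadcasts to all 2^n nodes in the optimal n rounds by flipping one address bit per round; the sketch below shows only that baseline, since the paper's fault-handling machinery is more involved and is omitted here:

```python
def broadcast_schedule(n):
    """Per-round (sender, receiver) pairs for broadcasting from node 0 in an
    n-dimensional hypercube; each round doubles the informed set."""
    rounds = []
    informed = {0}
    for k in range(n):
        sends = [(u, u ^ (1 << k)) for u in sorted(informed)]  # flip bit k
        informed |= {v for _, v in sends}
        rounds.append(sends)
    return rounds

for r, sends in enumerate(broadcast_schedule(3)):
    print(f"round {r}: {sends}")
```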

Triadic Relationship of Icon Design for Semi-Literate Communities

Icons, or pictorial and graphical objects, are commonly used in human-computer interaction (HCI) as mediators for communicating information to users. Yet there have been few studies focusing on a majority of the world's population, semi-literate communities, in terms of the fundamental know-how for designing icons for such populations. In this study, two sets of icons belonging to different icon taxonomies, abstract and concrete, are designed for a mobile application for semi-literate agricultural communities. We propose a triadic relationship of an icon, namely meaning, task, and mental image, which inherits the triadic relationship of a sign. User testing with the application and a post-pilot questionnaire were conducted as the experimental approach in two rural villages in India. Icons belonging to the concrete taxonomy performed better than abstract icons, provided that the icon design fulfills the underlying rules of the proposed triadic relationship.

Construction of Space-Filling Designs for Three Input Variables Computer Experiments

Latin hypercube designs (LHDs) are among the most widely applied space-filling designs in computer experiments. An LHD can be generated randomly, but a randomly chosen LHD may have bad properties and thus perform poorly in estimation and prediction. There is a connection between Latin squares and orthogonal arrays (OAs): a Latin square of order s is an arrangement of s symbols in s rows and s columns such that every symbol occurs exactly once in each row and once in each column, and such an arrangement exists for every positive integer s. In this paper, a computer program was written to construct orthogonal array-based Latin hypercube designs (OA-LHDs). OAs were constructed from a Latin square of order s, and the constructed OAs were then used to build the desired Latin hypercube designs for three input variables for use in computer experiments. The LHDs constructed have better space-filling properties and can be used in computer experiments that involve three input factors. The MATLAB 2012a package (www.mathworks.com/) was used to develop the program that constructs the designs.
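A compact sketch of the construction (a Tang-style OA-based LHD, shown here in Python rather than the paper's MATLAB program) builds a strength-2 OA from a cyclic Latin square and then spreads each symbol's runs across its stratum of levels:

```python
import numpy as np

def oa_lhd(s, seed=0):
    """OA-based LHD with n = s^2 runs and 3 factors, built from the cyclic
    Latin square L[i][j] = (i + j) mod s (a sketch, not the paper's program)."""
    rng = np.random.default_rng(seed)
    i, j = np.divmod(np.arange(s * s), s)
    oa = np.column_stack([i, j, (i + j) % s])   # OA(s^2, 3, s, 2)
    lhd = np.empty_like(oa)
    for col in range(3):
        for symbol in range(s):
            idx = np.where(oa[:, col] == symbol)[0]
            # Spread the s runs of this symbol over its stratum of s levels,
            # so each column is a permutation of 0..s^2-1 (the LHD property).
            lhd[idx, col] = symbol * s + rng.permutation(s)
    return lhd

print(oa_lhd(3))  # a 9-run, 3-factor OA-based Latin hypercube design
```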

A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

This work is the first step in a rather wide research activity, in collaboration with the Euro Mediterranean Center for Climate Changes, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
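For context, 3DVar minimizes the standard cost function J(x) = ½(x − xb)ᵀB⁻¹(x − xb) + ½(Hx − y)ᵀR⁻¹(Hx − y), where xb is the background state, B and R the background and observation error covariances, H the observation operator, and y the observations. The sketch below merely evaluates this functional with small dense stand-in matrices; the DD-DA solver and its CUDA kernels are, of course, far more elaborate:

```python
import numpy as np

def j3dvar(x, xb, B, y, H, R):
    """J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (Hx-y)' R^-1 (Hx-y)"""
    db = x - xb                  # departure from the background
    do = H @ x - y               # departure from the observations
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

n, m = 4, 2
xb = np.zeros(n); B = np.eye(n)          # background state and covariance
H = np.eye(m, n); R = 0.1 * np.eye(m)    # observation operator and covariance
y = np.array([1.0, 0.5])                 # observations
print(j3dvar(xb, xb, B, y, H, R))        # cost of the background itself
```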

Developing a Web-Based Workflow Management System in Cloud Computing Platforms

Cloud computing is an innovative and leading information technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. In this paper, we aim at the development of a workflow management system for cloud computing platforms, building on our previous research on the dynamic allocation of cloud computing resources and its workflow process. We took advantage of HTML5 technology and developed a web-based workflow interface. To enable many tasks running on the cloud platform to be combined in sequence, we designed a mechanism and developed an execution engine for workflow management on clouds. We also established a prediction model, integrated with the job queuing system, to estimate the waiting time and cost of individual tasks on different computing nodes, thereby helping users achieve maximum performance at the lowest cost. The proposed effort has the potential to provide an efficient, resilient, and elastic environment for cloud computing platforms. This development also helps boost user productivity by providing a flexible workflow interface that lets users design and control their tasks' flow from anywhere.
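The node-selection idea behind such a prediction model can be illustrated with a hypothetical sketch: given a predicted queue wait and a price per hour for each node type (all names and numbers below are invented for illustration, not taken from the paper), pick the cheapest node that still meets the deadline:

```python
def pick_node(nodes, runtime_h, deadline_h):
    """Among nodes whose predicted wait plus runtime meets the deadline,
    return the one with the lowest total cost (None if none qualifies)."""
    feasible = [n for n in nodes if n["wait_h"] + runtime_h <= deadline_h]
    return min(feasible, key=lambda n: runtime_h * n["price_per_h"], default=None)

nodes = [{"name": "gpu-a", "wait_h": 0.5, "price_per_h": 3.0},   # hypothetical
         {"name": "cpu-b", "wait_h": 2.0, "price_per_h": 0.8}]
print(pick_node(nodes, runtime_h=4.0, deadline_h=8.0))
```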

Evaluation of a Surrogate Based Method for Global Optimization

We evaluate the performance of a numerical method for the global optimization of expensive functions. The method uses a response surface to guide the search for the global optimum. This metamodel can be based on radial basis functions, kriging, or a combination of different models. We discuss how to set the cyclic parameters of the optimization method to balance local and global search. We also discuss the possible problem of Runge oscillations in the response surface.
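A minimal sketch of surrogate-guided search follows: an RBF response surface is fitted to the points evaluated so far, and the method cycles between exploiting the surrogate minimum and exploring far from known samples (a simplified stand-in for the cyclic local/global parameter schedule discussed in the paper; the objective is a toy function):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_f(x):                        # stand-in for the costly objective
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(size=(8, 2)); y = expensive_f(X)
for it in range(20):
    # Small smoothing keeps the system well-conditioned near-duplicate samples.
    surrogate = RBFInterpolator(X, y, smoothing=1e-9)
    cand = rng.uniform(size=(500, 2))
    if it % 3 == 2:                        # every third step: explore globally,
        score = -np.min(np.linalg.norm(cand[:, None] - X[None], axis=-1), axis=1)
    else:                                  # otherwise exploit the surrogate
        score = surrogate(cand)
    x_next = cand[np.argmin(score)]
    X = np.vstack([X, x_next]); y = np.append(y, expensive_f(x_next))
print("best point:", X[np.argmin(y)], "value:", y.min())
```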

Nonlinear Modeling of the PEMFC Based On NNARX Approach

The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system whose structure is hard to estimate correctly with traditional linear modeling approaches. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy of the NNARX model are tested by one-step-ahead prediction relating output voltage to input current, using experimental measurements of the PEMFC. The results show that the obtained nonlinear NNARX model can efficiently approximate the dynamics of the PEMFC, with the model output consistently matching the measured system output.
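A hedged sketch of an NNARX one-step-ahead model: an MLP maps lagged outputs and lagged inputs to the next output. Synthetic data stands in for the measured PEMFC current/voltage series, and the lag order of 2 is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 500)               # input signal (e.g. load current)
y = np.zeros(500)
for t in range(2, 500):                  # toy nonlinear dynamic system
    y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + np.tanh(u[t-1]) + 0.01 * rng.normal()

lag = 2
# Regressors: [y(t-1), y(t-2), u(t-1), u(t-2)] -> target y(t)
X = np.column_stack([y[lag-1:-1], y[lag-2:-2], u[lag-1:-1], u[lag-2:-2]])
target = y[lag:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:400], target[:400])
print("one-step-ahead R^2:", model.score(X[400:], target[400:]))
```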