A Study of Recent Contributions on Simulation Tools for Network-on-Chip

The growth in the number of Intellectual Property (IP) blocks, or cores, on the same chip has become a critical issue in System-on-Chip (SoC) design because of the intra-chip communication problems between chip elements. As a result, Network-on-Chip (NoC) has emerged as a system architecture that overcomes these communication issues. This paper presents a study of recent contributions on simulation tools for NoC. It also provides an overview of NoC and a comparison of several NoC simulators to help facilitate research in on-chip communication.

Measurement of Temperature, Humidity and Strain Variation Using Bragg Sensor

Measurement and monitoring of temperature, humidity and strain variation are in high demand in many fields, notably in structural health monitoring (SHM) systems. Fiber Bragg grating sensors (FBGS) are currently widely recommended for SHM systems because of their sensing characteristics. In this paper, we present the theory of the Bragg sensor and measure the variation of strain, temperature and humidity (SV, ST, SH) using a Bragg sensor. From these measurements, we deduce the fundamental relation between these parameters and the Bragg wavelength.
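
For reference, the standard Bragg condition and its first-order sensitivity to strain and temperature are sketched below in generic textbook form; humidity sensitivity is usually introduced through the swelling-induced strain of a coating. The symbols and coefficients are the usual ones and are not taken from this paper.

```latex
\[
  \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda,
  \qquad
  \frac{\Delta\lambda_B}{\lambda_B} \approx (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T,
\]
```

where λ_B is the Bragg wavelength, n_eff the effective refractive index of the fiber core, Λ the grating period, p_e the effective photo-elastic coefficient, ε the applied strain, α_Λ the thermal expansion coefficient, and α_n the thermo-optic coefficient.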

Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

The analysis of geographic inequality relies heavily on location-enabled statistical data and quantitative measures to present the spatial patterns of selected phenomena and analyze their differences. To protect the privacy of individual records and link them to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status of the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan, based on spatial partition principles that aim for homogeneity in the numbers of population and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of the outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1,757 per 100,000 persons, whereas the 2nd dissemination area of TGSC shows greater variation, ranging from 0 to 2,222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow is developed that consists of processing death certificates, geocoding street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of the measures, and geo-visualization of the statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of the mortality data and justify the analyzed results. With a common statistical area framework such as TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid misguided decision making.
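
To make the ASDR figures concrete, a minimal sketch of direct age-standardization is given below; the age bands, counts and standard-population weights are illustrative placeholders, not data from this study.

```python
# Direct age-standardization of a death rate (per 100,000).
# The age bands, standard-population weights and counts are illustrative only.

def age_standardized_death_rate(deaths, population, std_population):
    """ASDR = sum over age groups of (age-specific rate * standard weight) * 100,000."""
    total_std = sum(std_population)
    asdr = 0.0
    for d, p, s in zip(deaths, population, std_population):
        age_specific_rate = d / p          # deaths per person in this age group
        weight = s / total_std             # share of the standard population
        asdr += age_specific_rate * weight
    return asdr * 100_000

# Hypothetical counts for three age bands in one spatial unit.
deaths = [2, 15, 40]
population = [1200, 900, 400]
std_population = [20000, 15000, 5000]    # placeholder standard-population shares
print(round(age_standardized_death_rate(deaths, population, std_population)))
```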

Characteristics of Ozone Generated from Dielectric Barrier Discharge Plasma Actuators

Dielectric barrier discharge plasma actuators (DBD-PAs) have been developed as active flow control devices. However, the ozone produced by the DBD must be reduced before DBD-PAs can be used in practical applications. In this study, the variations of ozone concentration, flow velocity, and power consumption were investigated by changing the exposed electrodes of the DBD-PAs. Two exposed electrode prototypes were prepared: a span-type with an exposed electrode width of 0.1 mm, and a normal-type with a width of 5 mm. The span-type showed lower power consumption and higher flow velocity than the normal-type at Vp-p = 4.0-6.0 kV, while its ozone concentration was higher than that of the normal-type at Vp-p = 4.0-8.0 kV. In addition, it was confirmed that a catalyst located downstream of the exposed electrode can reduce the ozone concentration by 18-42% without affecting the induced flow.

Reusing Assessment Tests by Generating Arborescent Test Groups Using a Genetic Algorithm

Applying Information and Communication Technologies (ICT) notions to education and to its three basic processes (teaching, learning and assessment) can benefit both pupils and the professional development of teachers. Here, these notions are concepts taken from informatics, namely genetic algorithms and arborescent (tree) structures, applied to the specific process of assessment or evaluation. This paper uses them to generate subtrees from a main tree of tests related to one another by their degree of difficulty, where a node is a test and an edge is a direct connection between two tests that differ by one degree of difficulty. The generated structures, which are subtrees of the main tree, must contain the highest number of connections between the nodes and the lowest number of missing edges; in the particular case where no subtree without missing edges exists, the subtrees with the minimal number of missing edges are selected. The subtrees are represented as sequences. The tests themselves are unchanged (a number coding a test represents that test in every sequence) and are reused for each sequence of tests.
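
A minimal sketch of one way to score candidate subtrees encoded as sequences is given below, assuming tests are numbered and two tests are connected when their difficulty levels differ by exactly one; the difficulty table and fitness definition are illustrative, not the authors' implementation.

```python
import random

# Illustrative only: tests are coded as integers; two tests are connected in the
# main tree when their difficulty levels differ by exactly one.
DIFFICULTY = {1: 1, 2: 2, 3: 2, 4: 3, 5: 3, 6: 4}   # test id -> difficulty level

def missing_edges(sequence):
    """Count consecutive pairs in a candidate sequence that are NOT difficulty-adjacent."""
    return sum(
        1 for a, b in zip(sequence, sequence[1:])
        if abs(DIFFICULTY[a] - DIFFICULTY[b]) != 1
    )

def fitness(sequence):
    """Higher is better: fewer missing edges means a better-connected subtree."""
    return -missing_edges(sequence)

# One selection step of a genetic algorithm sketched: keep the fitter of two
# randomly generated candidate sequences of four distinct tests.
candidates = [random.sample(list(DIFFICULTY), 4) for _ in range(2)]
best = max(candidates, key=fitness)
print(best, "missing edges:", missing_edges(best))
```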

Effects of Sprint Training on Athletic Performance Related Physiological, Cardiovascular, and Neuromuscular Parameters

Practicing recurring resistance workouts such as sprint training may cause changes in human muscle. These changes may result from a combination of several factors that determine physical fitness; it is therefore important to identify them, and several studies were reviewed to investigate these changes. The reported changes included increased citrate synthase (CS) maximal activity, an increased capacity for pyruvate oxidation, improved molecular signaling related to human performance, increased resting muscle glycogen and total GLUT4 protein content, and better health outcomes such as enhanced cardiorespiratory fitness. Sprint training also produces numerous long-term changes in the human body, such as better enzyme action, changes in muscle fibers, and improved oxidative ability. This is important because stroke volume (SV) is the critical factor influencing maximal cardiac output and therefore oxygen delivery and maximal aerobic power.

Suggestion for Malware Detection Agent Considering Network Environment

The number of smartphone users is increasing rapidly. Accordingly, many companies are adopting BYOD (Bring Your Own Device) policies, which allow employees to bring private smartphones into the company, to increase work efficiency. However, smartphones are constantly under the threat of malware, so a company network to which smartphones are connected is exposed to serious risks. Most smartphone malware detection techniques perform independent detection, i.e., detection limited to a single target application. In this paper, we analyze a variety of intrusion detection techniques and, based on the results of this analysis, propose an agent that uses a network-based IDS.

A Study of the Costs and Benefits of Smart City Projects Including the Scenario of Public-Private Partnerships

A smart city project involves benefits and costs that can be classified into direct and indirect categories. Externalities also come into the picture, but they are often difficult to quantify. Despite this barrier, policy makers need to carry out cost-benefit analysis to justify the huge investments needed to make a city smart. The recent trend is towards engaging the private sector to utilize its resources and expertise, especially in the Information and Communication Technology (ICT) areas, where innovations blossom. This study focuses on the identification of costs (on a life cycle basis) and benefits associated with smart city project developments, based on a comprehensive literature review and case studies. Where public-private partnerships warrant consideration, the related costs and benefits are highlighted. The findings will be useful for city policy makers.
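
As a point of reference, one standard life-cycle formulation of such a cost-benefit comparison is the discounted benefit-cost ratio sketched below; the symbols are generic and not drawn from this study.

```latex
\[
  \mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t\,(1+r)^{-t}}{\sum_{t=0}^{T} C_t\,(1+r)^{-t}},
\]
```

where B_t and C_t are the (direct and indirect) benefits and costs in year t, r is the discount rate, and T the project life; a project is conventionally considered justified when BCR > 1, i.e., when its net present value is positive.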

Cluster Analysis of Customer Churn in Telecom Industry

This research examines the factors that affect customer churn (CC) in the Jordanian telecom industry. A total of 700 surveys were distributed, and cluster analysis revealed three main clusters. The results showed that CC and customer satisfaction (CS) were the key determinants in forming the three clusters. In two clusters, the center values of CC were high, indicating that the customers were loyal and that SC was expensive and time- and energy-consuming. Still, the mobile service provider (MSP) should enhance its communication (COM), value-added services (VASs), and customer complaint management systems (CCMS). Finally, for the third cluster, the center value of CC indicates a poor level of loyalty, which facilitates customer churn to another MSP. The results of this study provide valuable feedback for MSP decision makers on approaches to improving their performance and reducing CC.
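
A minimal sketch of the kind of clustering step described above, assuming k-means over respondent-level scores such as churn intention and satisfaction; the variable names, the choice of k = 3, and the synthetic data are illustrative placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: synthetic respondent scores (e.g., 1-5 Likert averages)
# for churn intention (CC) and customer satisfaction (CS).
rng = np.random.default_rng(0)
scores = np.vstack([
    rng.normal([4.2, 2.0], 0.3, size=(50, 2)),   # high churn, low satisfaction
    rng.normal([2.0, 4.3], 0.3, size=(50, 2)),   # low churn, high satisfaction
    rng.normal([3.0, 3.0], 0.3, size=(50, 2)),   # intermediate group
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
for i, center in enumerate(kmeans.cluster_centers_):
    print(f"cluster {i}: CC center = {center[0]:.2f}, CS center = {center[1]:.2f}")
```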

Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products) for Higher Education

Higher education methods need to evolve because the new generations of students learn in different ways. One way to do this is by adopting emerging technologies and new learning methods and by promoting the maker movement. As a result, Tecnologico de Monterrey is developing Open Innovation Laboratories as an immediate response to the educational challenges of the world. This paper presents an Open Innovation Laboratory for the Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products). The Open Innovation Laboratory is composed of a set of specific resources that students and teachers use to provide solutions to current problems of priority sectors through the development of a new generation of products. This new generation of products incorporates the concepts of Sensing, Smart, and Sustainable. The Open Innovation Laboratory has been implemented in different courses in the context of New Product Development (NPD) and Integrated Manufacturing Systems (IMS) at Tecnologico de Monterrey. The implementation consists of adapting the Open Innovation Laboratory to the course syllabus, in combination with specific methodologies for product development, learning methods (Active Learning and Blended Learning using Massive Open Online Courses, MOOCs), and rapid product realization platforms. Using the proposed concepts, it is possible to show that students can propose innovative and sustainable products and to demonstrate how the learning process can be improved using technological resources in the higher education sector. Finally, examples of innovative S3 products developed at Tecnologico de Monterrey are presented.

A Sensitive Approach on Trace Analysis of Methylparaben in Wastewater and Cosmetic Products Using Molecularly Imprinted Polymer

Parabens are antimicrobial molecules widely used in cosmetic products as preservative agents. Among them, methylparaben (MP) is the most frequently used ingredient in cosmetic preparations. Nevertheless, their potential dangers have led to the development of sensitive and reliable methods for their determination in environmental samples. Firstly, a sensitive and selective molecularly imprinted polymer (MIP) sensor based on a screen-printed gold electrode (Au-SPE), assembled on a polymeric layer of carboxylated poly(vinyl chloride) (PVC-COOH), was developed. After template removal, the obtained material was able to rebind MP and discriminate it from other interfering species such as glucose, sucrose, and citric acid. The behavior of the molecularly imprinted sensor was characterized by cyclic voltammetry (CV), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). The biosensor was found to have a linear detection range from 0.1 pg.mL-1 to 1 ng.mL-1 and low limits of detection of 0.12 fg.mL-1 and 5.18 pg.mL-1 by DPV and EIS, respectively. For applications, the biosensor was employed to determine the MP content of four wastewater samples from the city of Meknes and two cosmetic products (shower gel and shampoo). The operational reproducibility and stability of the biosensor were also studied. Secondly, another MIP biosensor, based on tungsten trioxide (WO3) functionalized with gold nanoparticles (Au-NPs) and assembled on a polymeric layer of PVC-COOH, was developed with the main goal of increasing the sensitivity of the biosensor. This MIP biosensor was also successfully applied to MP determination in wastewater samples and cosmetic products.
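
For context, detection limits of such electrochemical sensors are commonly estimated from the calibration curve using the generic criterion below; this is the textbook formula, not a derivation specific to this paper.

```latex
\[
  \mathrm{LOD} = \frac{3\,\sigma_{\text{blank}}}{S},
\]
```

where σ_blank is the standard deviation of the blank (or of the calibration intercept) and S is the slope of the linear DPV or EIS response versus MP concentration; a factor of 3.3 is also commonly used.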

Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Accordingly, there are several scholarly publications on the topic, either presenting or testing different commercialization methods. However, young entrepreneurs, technologists and scientists would like to know the best method for commercializing their innovations, which raises the question: what is the best commercialization method? To answer this question, a systematic literature review was conducted and practitioners were interviewed. The literature review revealed that many methods exist, but that new methods are needed to improve commercialization, especially in these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, but that the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method is presented.

Analyzing the Usage of Social Media: A Study on Elderly in Malaysia

In the early days of social media, the young adult age group clearly made up the largest share of users. However, while users at the younger end keep getting younger, the elderly have also become a new force on social media, and this age group has grown rapidly. On top of that, the influence of social media on the elderly is becoming more significant and is even trending among them, which is notable because basic computer knowledge was not instilled in them when they were young. This age group tends to become more engrossed than the young, since social media is new to them and they regard it as a new platform for approaching things. Generally, social media has been accepted and accessed mostly by teenagers and young adults, and it would be reasonable to assume that it is not widely accepted among the elderly; surprisingly, however, the elderly can be more addicted to social media than teenagers. Therefore, this study aims to determine and understand the relationship between the elderly and social media and how they employ social media in their lives. An online survey of 200 elderly respondents aged 45-80 and an interview with a media expert were conducted to answer the main research questions. The Uses and Gratifications Approach is employed as the theoretical framework. The findings revealed that the majority of respondents use social media to connect with family and friends and for leisure purposes, and that the elderly use social media differently according to their needs and wants, which is in line with the central claim of Uses and Gratifications theory. Considering the large role social media plays in culture and daily life today, the findings shed light on the effects of social media on the elderly, who are usually relegated to a minority group in an age where the internet and social media are of great importance to society. The results may also be useful for understanding behavioral patterns and preferences in social media usage among the elderly.

Application of the Method of Symmetries to the Calculation and Design of a Circular Plate with Variable Thickness

A problem is formulated for the natural oscillations of a circular plate of linearly variable thickness on the basis of the symmetry method. The equations for the natural frequencies and mode shapes of the plate are obtained, provided that it is rigidly clamped along the inner contour. The first three eigenfrequencies are calculated, and the eigenmodes of the oscillations of the acoustic element are constructed. An algorithm for applying the symmetry method and the factorization method to problems in the theory of oscillations of plates with variable thickness is shown. The effectiveness of the approach is demonstrated by comparing known results with those obtained in the article, and the results are shown to be more accurate and reliable.
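
For orientation, the constant-thickness limit of the problem is the classical thin-plate vibration equation shown below; in the variable-thickness case treated here, h = h(r) and the flexural rigidity D = D(r) enter inside the differential operator, which is what the symmetry and factorization methods are used to handle. The notation is generic and not taken from the article.

```latex
\[
  D\,\nabla^{4} w + \rho h\,\frac{\partial^{2} w}{\partial t^{2}} = 0,
  \qquad
  \omega_{n} = \frac{\lambda_{n}^{2}}{a^{2}}\sqrt{\frac{D}{\rho h}},
  \qquad
  D = \frac{E h^{3}}{12\,(1-\nu^{2})},
\]
```

where w is the transverse deflection, a the plate radius, ρ the density, E Young's modulus, ν Poisson's ratio, and λ_n the dimensionless eigenvalue determined by the boundary conditions (here, a rigidly clamped inner contour).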

C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network

One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and the time duration of neighbour route selection have not been addressed for the cross layer. Time-varying physical-layer communication over the cross layer causes a high traffic load in the sensor network; although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined, time-slot-based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This process considers MAC-layer operation with dynamic discovery of the route neighbour table. The discovered route path for packet communication then employs a Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network system, where Broad Route means time slotting over route paths of varying length. During packet communication in this sensor network, the transmission of packets is spread over different times and varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to characterize the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sub-layer, and the minimized power level helps reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as the power factor of packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
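
A minimal sketch of how received power can be sampled under a Rayleigh fading assumption, as used above to assess per-link power levels; the transmit power, path-loss exponent and distance are illustrative placeholders, not parameters from this paper.

```python
import numpy as np

def rayleigh_received_power(p_tx_dbm, distance_m, path_loss_exp=3.0,
                            ref_loss_db=40.0, rng=None):
    """Sample received power (dBm) over a Rayleigh-fading link.

    Illustrative model: log-distance path loss plus a Rayleigh fading gain
    (exponentially distributed power). All constants are placeholder assumptions.
    """
    rng = rng or np.random.default_rng()
    path_loss_db = ref_loss_db + 10 * path_loss_exp * np.log10(distance_m)
    fading_gain = rng.exponential(1.0)          # |h|^2 for Rayleigh-distributed |h|
    return p_tx_dbm - path_loss_db + 10 * np.log10(fading_gain)

rng = np.random.default_rng(1)
samples = [rayleigh_received_power(0.0, 30.0, rng=rng) for _ in range(1000)]
print(f"mean received power over 30 m: {np.mean(samples):.1f} dBm")
```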

RoboWeedSupport - Sub-Millimeter Weed Image Acquisition in Cereal Crops at Speeds up to 50 km/h

For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings offered by a decision support system and the weed inspections it requires. In order to automate the weed inspections, it is desirable to generate a map of the weed species present within the field; to generate the map, images must be captured with samples covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution mounted on an existing tractor, a cost of 2.4 EUR/ha was estimated to be obtainable under equal conditions.
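
As a quick check of how such per-hectare figures relate to machine capacity, a small sketch under the assumption that cost per hectare is the hourly operating cost divided by field capacity; only the 28 ha/h capacity comes from the abstract, the hourly cost below is a placeholder.

```python
# Illustrative only: relate an assumed hourly operating cost to a per-hectare cost.
capacity_ha_per_hour = 28        # ATV field capacity reported in the abstract
hourly_cost_eur = 185.0          # placeholder assumption (driver + ATV + camera system)

cost_per_ha = hourly_cost_eur / capacity_ha_per_hour
print(f"{cost_per_ha:.1f} EUR/ha")   # ~6.6 EUR/ha with these assumed inputs
```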

Influence of the Line Parameters in Transmission Line Fault Location

In this paper, two fault location algorithms for transmission lines are presented, which use the line parameters to estimate the distance to the fault. The first algorithm uses only the measurements from one end of the line together with the positive and zero sequence parameters of the line, while the second uses the measurements from both ends of the line and only the positive sequence parameters. The algorithms were tested on a transmission grid modeled in MATLAB. In a first stage, a fault location baseline was established, in which the algorithms estimate the fault locations using the exact line parameters. After that, the positive and zero sequence resistance and reactance of the line were recalculated for different ground resistivity values, and the fault locations were estimated again in order to compare the results with the baseline. The results show that the algorithm that uses the zero sequence impedance of the line is the more sensitive to modifications of the line parameters, while the other algorithm is less sensitive.
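
For context, a common one-ended, impedance-based distance estimate for a single-phase-to-ground fault is sketched below; it shows where the positive and zero sequence line parameters enter, but it is the textbook form, not necessarily the exact algorithm of the paper.

```latex
\[
  m = \frac{\operatorname{Im}\!\left(\dfrac{\underline{V}_{A}}{\underline{I}_{A} + \underline{k}_{0}\, 3\underline{I}_{0}}\right)}
           {\operatorname{Im}\!\left(\underline{Z}_{1L}\right)},
  \qquad
  \underline{k}_{0} = \frac{\underline{Z}_{0L} - \underline{Z}_{1L}}{3\,\underline{Z}_{1L}},
\]
```

where m is the per-unit distance to the fault, V_A and I_A are the voltage and current measured at one line end, 3I_0 is the residual current, and Z_1L, Z_0L are the positive and zero sequence impedances of the full line. Errors in Z_0L (for example, from an inaccurate ground resistivity) therefore propagate directly into m, which is consistent with the sensitivity reported above.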

Characterization of Extreme Low-Resolution Digital Encoder for Control System with Sinusoidal Reference Signal

A low-resolution digital encoder (LRDE) is commonly adopted as a position sensor in low-cost and resource-constrained applications. Traditionally, a digital encoder is modeled as a quantizer without considering the initial position of the LRDE. However, this model cannot be applied to an extreme LRDE, for which the stroke of the angular motion is only a few times the resolution of the encoder. Moreover, the actual angular motion is substantially distorted by such an extreme LRDE, so that the encoder reading does not faithfully represent it. This paper presents a modeling method for the extreme LRDE that takes the initial position of the LRDE into account. For a control system with a sinusoidal reference signal and an extreme LRDE, the paper analyzes the characteristics of the angular motion. Specifically, two descriptors of sinusoidal angular motion are studied, which shed light on the actual angular motion behind the extreme LRDE readings.
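
A minimal sketch of the quantizer view of an encoder reading, extended with an initial-position offset as described above; the resolution, amplitude and offset values are illustrative placeholders, not the paper's model.

```python
import numpy as np

def encoder_reading(theta, resolution, theta0=0.0):
    """Incremental encoder modeled as a quantizer with an initial-position offset.

    theta      : actual angular position(s) in rad
    resolution : angular width of one encoder count in rad
    theta0     : initial position within a count (rad), the offset term added here
    """
    return np.floor((theta + theta0) / resolution).astype(int)

# Sinusoidal motion whose stroke spans only a few counts (an "extreme" LRDE).
t = np.linspace(0.0, 1.0, 1000)
theta = 0.02 * np.sin(2 * np.pi * 2 * t)            # +/- 0.02 rad stroke
counts_zero_offset = encoder_reading(theta, resolution=0.01)
counts_with_offset = encoder_reading(theta, resolution=0.01, theta0=0.004)

# The same physical motion yields different reading patterns depending on theta0.
print("time at count 0:",
      np.mean(counts_zero_offset == 0), "vs", np.mean(counts_with_offset == 0))
```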

Improving the Performance of the nMPRA Architecture by Implementing Specific Functions in Hardware

Minimizing the response time to asynchronous events in a real-time system is an important factor in increasing responsiveness and a key concern in designing equipment fast enough for the most demanding applications. This article presents the results of validating the nMPRA (Multi Pipeline Register Architecture) using a Virtex-7 FPGA. The nMPRA concept is a processor with the scheduler implemented in hardware at the processor level; this is done without affecting possible bus communication, as is the case with other CPU solutions. The implementation of static and dynamic scheduling operations in hardware, together with the improved handling of interrupts and events by the real-time executive described in this article, represents a key solution for eliminating the overhead of operating system functions. The nMPRA processor is capable of executing preemptive scheduling using various algorithms without a software scheduler. Therefore, we also present various scheduling methods and algorithms used for scheduling real-time tasks.
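
To illustrate the kind of decision that such a hardware scheduler resolves, a minimal software sketch of fixed-priority preemptive task selection is given below; the task set, priority convention and function names are illustrative assumptions, not the nMPRA implementation.

```python
# Illustrative only: the fixed-priority preemptive decision that a hardware
# scheduler resolves each cycle, sketched here in software.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int      # lower number = higher priority (assumed convention)
    ready: bool

def select_task(tasks, running=None):
    """Return the task that should own the CPU: the highest-priority ready task,
    preempting the currently running one if necessary."""
    ready = [t for t in tasks if t.ready]
    if not ready:
        return running
    best = min(ready, key=lambda t: t.priority)
    if running is None or best.priority < running.priority:
        return best            # preemption happens here
    return running

tasks = [Task("sensor_isr", 0, False), Task("control_loop", 1, True), Task("logger", 3, True)]
running = select_task(tasks)
tasks[0].ready = True          # asynchronous event arrives
print(select_task(tasks, running).name)   # -> sensor_isr (preempts control_loop)
```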

The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry

A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher in identifying and focusing on critical areas of a research project and prevents the formation of preconceptions shaped by the current body of literature. The approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to build a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.