Evaluation of Residual Stresses in Human Face as a Function of Growth

Growth and remodeling of biological structures have attracted considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting an organ's shape from medical imaging provides no information about the residual stresses present in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring these residual stresses can lead to erroneous results in numerical simulations, whereas accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analyses. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed; it counteracts the effect of gravity and maintains tissue firmness. Our hypothesis is that the wrinkles that appear with aging could be a consequence of decreasing residual stress, which is then no longer able to counteract gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.

Interplay of Power Management at Core and Server Level

As the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, static power increasingly dominates their energy consumption. Thus, the power savings obtainable from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. This means that future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas in the past this approach had only a limited effect and was therefore not widely adopted. While powering off complete servers has been used to save energy, it will become superfluous in many cases once individual cores can be powered down. An important advantage of this is a greatly reduced time to respond to increased computational demand. We incorporate these developments into a server power model and quantify the advantage. Our conclusion is that data-center strategies for deciding when to power off server systems might in the future be applied at the core level, while load-balancing mechanisms previously used at the core level might be applied at the server level.
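
To make the argument concrete, the following minimal sketch contrasts consolidating load onto few servers (with the rest power-gated) against spreading it evenly, under a simple linear power model; the wattages, the sleep-state draw, and the consolidation policy are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a linear server power model used to compare load placement
# strategies; all parameter values are illustrative assumptions.

def server_power(utilization, p_idle=100.0, p_max=250.0):
    """Power draw (W) of one server at a given utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

def total_power(load, n_servers, consolidate, p_off=10.0):
    """Total power for a cluster handling `load` (in units of one server's capacity).

    consolidate=True packs the load onto as few servers as possible and keeps
    the remaining servers in a low-power state drawing `p_off` watts.
    """
    if consolidate:
        full = int(load)                    # servers running at 100 %
        rest = load - full                  # fractional load on one more server
        active = full + (1 if rest > 0 else 0)
        power = full * server_power(1.0) + (server_power(rest) if rest > 0 else 0.0)
        return power + (n_servers - active) * p_off
    # spread the load evenly over all servers
    return n_servers * server_power(load / n_servers)

if __name__ == "__main__":
    for load in (1.0, 2.5, 5.0):
        print(load,
              round(total_power(load, 8, consolidate=True), 1),
              round(total_power(load, 8, consolidate=False), 1))
```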

Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe

The increasing availability of information about Earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and to cover only limited portions of the Earth. Hence the need to merge large-scale height maps, which are typically available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, which are a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes such as CesiumJS do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS, and therefore accessible via the web from a standard browser without any additional plug-in.
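
As a rough illustration of the core merging step, the sketch below overlays a local high-resolution height patch onto a coarse worldwide grid; the tile layout, nearest-neighbour resampling, and NaN no-data convention are assumptions for demonstration, not the Terrain Builder's actual pipeline, which emits Cesium-compatible terrain tiles.

```python
# Illustrative merge of a worldwide low-resolution DEM tile with a local
# high-resolution patch; array shapes and conventions are assumptions.
import numpy as np

def resample_nearest(dem, out_shape):
    """Nearest-neighbour resampling of a 2-D height grid to out_shape."""
    rows = np.arange(out_shape[0]) * dem.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * dem.shape[1] // out_shape[1]
    return dem[np.ix_(rows, cols)]

def merge_tile(global_tile, local_patch, patch_window):
    """Overwrite the tile region covered by the high-resolution patch.

    patch_window = (row_start, row_stop, col_start, col_stop) in tile pixels;
    NaN values in the patch are treated as no-data and left untouched.
    """
    r0, r1, c0, c1 = patch_window
    resampled = resample_nearest(local_patch, (r1 - r0, c1 - c0))
    out = global_tile.copy()
    region = out[r0:r1, c0:c1]
    mask = ~np.isnan(resampled)
    region[mask] = resampled[mask]
    return out

if __name__ == "__main__":
    world = np.zeros((256, 256))                  # coarse worldwide heights
    lidar = np.random.rand(400, 400) * 50 + 100   # dense local survey
    merged = merge_tile(world, lidar, (64, 192, 64, 192))
    print(merged[100, 100], merged[0, 0])
```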

Leveraging Quality Metrics in Voting Model Based Thread Retrieval

The ability to seek and share knowledge has made online forums popular in recent years. Although online forums are valuable sources of information, the variety of message sources makes retrieving reliable threads with high-quality content a challenge. The majority of existing information retrieval systems ignore the quality of retrieved documents, particularly in the field of thread retrieval. In this research, we present an approach that employs various quality features in order to investigate the quality of retrieved threads. Different aspects of content quality, including completeness, comprehensiveness, and politeness, are assessed using these features, which leads to finding not only textually but also conceptually relevant threads for a user query within a forum. To analyse the influence of the features, we used an adapted version of the voting model for thread search as the retrieval system. We equipped it with each feature individually and with various combinations of features over multiple runs. The results show that incorporating the quality features significantly enhances the effectiveness of the retrieval system.
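
The sketch below shows one plausible way to combine message-level relevance votes with a thread-level quality score; the CombSUM aggregation and the linear interpolation weight are assumptions for illustration, not the exact model evaluated in the paper.

```python
# Minimal sketch of a voting-model thread ranker extended with a quality
# feature; the aggregation and interpolation weight are assumptions.
from collections import defaultdict

def rank_threads(message_scores, thread_of, quality, alpha=0.7):
    """message_scores: {message_id: relevance score for the query}
    thread_of:       {message_id: thread_id}
    quality:         {thread_id: quality score in [0, 1]}
    Returns threads sorted by the combined score."""
    votes = defaultdict(float)
    for msg, score in message_scores.items():   # each retrieved message
        votes[thread_of[msg]] += score          # "votes" for its thread
    combined = {t: alpha * s + (1 - alpha) * quality.get(t, 0.0)
                for t, s in votes.items()}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    scores = {"m1": 0.9, "m2": 0.4, "m3": 0.7}
    threads = {"m1": "t1", "m2": "t1", "m3": "t2"}
    qual = {"t1": 0.2, "t2": 0.9}   # e.g. completeness / politeness features
    print(rank_threads(scores, threads, qual))
```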

Evaluation of Cognitive Benefits among Differently Abled Subjects with Video Game as Intervention

In this study, the potential benefits of playing an action video game among congenitally deaf and mute subjects are reported in terms of EEG ratio indices. The frontal and occipital lobes are associated with the development of motor skills and cognition, and with visual information processing and color recognition. Sixteen hours of first-person shooter action video game play resulted in an increase of the ratios β/(α+θ) and β/θ in the frontal and occipital lobes. This can be attributed to the enhancement of certain aspects of cognition among the subjects.
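
For reference, the ratio indices can be computed from band powers estimated with Welch's method, as in the sketch below; the band limits and Welch parameters are common defaults and should be read as assumptions.

```python
# Sketch of computing the reported EEG ratio indices from one channel;
# band limits and Welch settings are assumed defaults, not the study's.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def ratio_indices(eeg, fs):
    """Return beta/(alpha+theta) and beta/theta for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    p = {name: band_power(freqs, psd, lo, hi) for name, (lo, hi) in BANDS.items()}
    return p["beta"] / (p["alpha"] + p["theta"]), p["beta"] / p["theta"]

if __name__ == "__main__":
    fs = 256
    t = np.arange(0, 60, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
    print(ratio_indices(eeg, fs))
```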

The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron

The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that predicts the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used was collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using their prediction costs, computed from a combination of the loss matrix and the confusion matrix. The designed prediction models show that the status of traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume, and day of the month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
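
A minimal sketch of this modelling pipeline, assuming placeholder data, hyper-parameters, and loss values rather than those of the Gauteng dataset, is given below using scikit-learn.

```python
# Sketch: an MLP on its own and an MLP wrapped in a bagging ensemble,
# evaluated by cross-validation and a cost built from a loss matrix and the
# confusion matrix. Features and loss values are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# stand-in for (travel time, average speed, traffic volume, day of month)
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=0)

loss = np.array([[0, 1],    # loss[i][j] = cost of predicting class j when truth is i
                 [5, 0]])   # missing congestion (row 1, col 0) is penalised most

for name, model in [
    ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)),
    ("Bagged MLP", BaggingClassifier(
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
        n_estimators=10, random_state=0)),
]:
    pred = cross_val_predict(model, X, y, cv=5)
    cm = confusion_matrix(y, pred)
    print(name, "cost =", int((cm * loss).sum()))
```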

Patient-Specific Modeling Algorithm for Medical Data Based on AUC

Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision tree paradigm that use the AUC as the metric for attribute selection. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently in terms of area under the ROC curve (AUC). Our results provide support for patient-specific methods being a promising approach to making clinical predictions.
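
The sketch below illustrates AUC-based attribute selection in its simplest form, scoring each attribute by how well its raw values rank the outcome; the data and the symmetric-AUC trick are illustrative assumptions, not the PSDP implementation.

```python
# Minimal sketch of selecting an attribute by AUC rather than entropy;
# data and scoring details are assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_of_attribute(values, outcome):
    """AUC of using a single attribute's values as a ranking score.

    AUC is symmetric around 0.5, so an attribute that ranks cases in
    reverse order is as informative as one that ranks them correctly."""
    auc = roc_auc_score(outcome, values)
    return max(auc, 1.0 - auc)

def select_attribute(X, y):
    """Return the index of the attribute with the highest AUC."""
    scores = [auc_of_attribute(X[:, j], y) for j in range(X.shape[1])]
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    X = np.column_stack([rng.normal(size=200),                      # noise
                         y + rng.normal(scale=0.5, size=200)])      # informative
    print(select_attribute(X, y))
```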

Day/Night Detector for Vehicle Tracking in Traffic Monitoring Systems

Recently, traffic monitoring has attracted the attention of computer vision researchers. Many algorithms have been developed to detect and track moving vehicles. In fact, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component that differentiates between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic monitoring scenes. The detector employs the hue histogram and the value histogram of the top half of the image frame. Experimental results show that extracting the brightness features along with the color features within the top region of the image is effective for classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
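
A minimal sketch of such an HSV-based check on the top half of a frame is shown below; the histogram sizes and the brightness threshold are assumptions chosen for illustration, not the tuned values of the proposed detector.

```python
# Sketch of an HSV-based day/night check on the top half of a frame;
# histogram sizes and the decision threshold are assumptions.
import cv2
import numpy as np

def day_night_features(frame_bgr):
    """Return normalized hue and value histograms of the top half of the frame."""
    top = frame_bgr[: frame_bgr.shape[0] // 2]
    hsv = cv2.cvtColor(top, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    val_hist = cv2.calcHist([hsv], [2], None, [256], [0, 256]).ravel()
    return hue_hist / hue_hist.sum(), val_hist / val_hist.sum()

def is_daytime(frame_bgr, bright_thresh=0.35):
    """Classify the scene as daytime if enough top-half pixels are bright."""
    _, val_hist = day_night_features(frame_bgr)
    bright_fraction = val_hist[128:].sum()   # share of pixels with V >= 128
    return bright_fraction > bright_thresh

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
    print(is_daytime(frame))
```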

Questions Categorization in E-Learning Environment Using Data Mining Technique

Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities are offering online courses, and for evaluation purposes, e-examination systems are being widely adopted in academic environments. Multiple-choice tests are extremely popular. In the move from traditional examinations to e-examinations, Moodle is being used as a Learning Management System (LMS). Moodle logs every click that students make for attempt and navigation purposes during an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics. In recent years, there has been increasing interest in the use of data mining in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is, however, still in its infancy. Log data generated by students during online examinations can be used to discover knowledge with the help of data mining techniques. In web-based applications, the number of right and wrong answers in a test result is not sufficient to assess and evaluate a student's performance, so assessment techniques must be more intelligent: if a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, which is the focus of the proposed work. Among data mining techniques, clustering in particular is used. The method decides the difficulty level of the questions, categorizes them as tough, moderate, or easy, and later serves them to students according to their performance. The proposed experiment categorizes the question set and also groups the students based on their examination performance. This helps the instructor guide the students more specifically. In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
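
A minimal sketch of clustering questions into three difficulty levels is given below; the feature set (fraction of correct answers, average time spent) and the cluster-labelling rule are assumptions for illustration, not the exact features mined from the Moodle logs.

```python
# Sketch of clustering questions into three difficulty levels from log
# features; the features and labelling rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def categorize_questions(fraction_correct, avg_time_sec):
    """Cluster questions and label the clusters easy / moderate / tough."""
    X = np.column_stack([fraction_correct, avg_time_sec])
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    # the cluster whose centre has the highest success rate is "easy",
    # the lowest is "tough", the remaining one is "moderate"
    order = np.argsort(km.cluster_centers_[:, 0])      # ascending correctness
    names = {order[0]: "tough", order[1]: "moderate", order[2]: "easy"}
    return [names[c] for c in km.labels_]

if __name__ == "__main__":
    correct = np.array([0.95, 0.90, 0.55, 0.50, 0.15, 0.10])
    time_s = np.array([20, 25, 60, 70, 150, 170])
    print(categorize_questions(correct, time_s))
```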

Improved Network Construction Methods Based on Virtual Rails for Mobile Sensor Network

Although Mobile Wireless Sensor Networks (MWSNs), which consist of mobile sensor nodes (MSNs), can cover a wide observation region with a small number of sensor nodes, they need to construct a network that collects the sensing data at the base station by moving the MSNs. As an effective approach, the network construction method based on Virtual Rails (VRs), referred to as the VR method, has been proposed. In this paper, we propose two effective techniques for the VR method. They can prolong the operation time of the network, which is limited by the battery capacity and energy consumption of the MSNs. The first technique, an effective arrangement of VRs, nearly equalizes the number of MSNs belonging to each VR. The second technique, an adaptive movement method for MSNs, takes into account the residual battery energy. In simulations, we demonstrate that each technique improves the network lifetime and that the combination of both techniques is the most effective.
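
The sketch below illustrates the two ideas in their simplest form: balancing the number of MSNs per virtual rail and choosing the mover by residual battery energy; the data structures and greedy policies are assumptions, not the proposed protocol itself.

```python
# Illustrative sketch of (1) balancing MSN counts across virtual rails and
# (2) picking the next mover by residual energy; both policies are assumptions.

def assign_to_rails(msn_ids, n_rails):
    """Round-robin assignment that keeps the number of MSNs per rail equal
    (within one)."""
    rails = {r: [] for r in range(n_rails)}
    for i, msn in enumerate(msn_ids):
        rails[i % n_rails].append(msn)
    return rails

def next_mover(residual_energy, rail_members):
    """Choose the MSN on a rail with the most residual energy to move next,
    so that battery drain is spread across the rail's members."""
    return max(rail_members, key=lambda msn: residual_energy[msn])

if __name__ == "__main__":
    msns = [f"msn{i}" for i in range(10)]
    rails = assign_to_rails(msns, 3)
    energy = {m: 100 - 7 * i for i, m in enumerate(msns)}
    print(rails)
    print(next_mover(energy, rails[0]))
```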

The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Energy consumption data, in particular those of public buildings, are affected by many factors: the building structure, climate and environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods of data analysis are insufficient for such data. This paper examines data mining technology and its application to the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research directions for future studies are given.

Automatic Checkpoint System Using Face and Card Information

In the deep south of Thailand, checkpoints for people verification are necessary for the security management of risk zones, such as official buildings in the conflict area. In this paper, we propose an automatic checkpoint system that verifies persons using information from ID cards and facial features. Methods for extracting and verifying a person's information are introduced, based on useful data such as the ID number and name extracted from official cards and facial images taken from video. The proposed system shows promising results and has a real impact on the local society.

Online Topic Model for Broadcasting Contents Using Semantic Correlation Information

This paper proposes a method of learning topics for broadcasting contents. There are two kinds of texts related to broadcasting contents. One is the broadcasting script, a series of texts including directions and dialogues. The other is blog posts, which contain relatively abstract content, stories, and diverse information about the broadcast. Although the two kinds of texts cover similar broadcasting contents, the words used in blog posts and in broadcasting scripts differ. When unseen words appear, a method is needed to reflect them in the existing topics. In this paper, we introduce a semantic vocabulary expansion method that handles such unseen words. We expand the topics of the broadcasting script by incorporating the words from blog posts: each word in a blog post is added to the most semantically correlated topics. We use word2vec to obtain the semantic correlation between words in blog posts and the topics of scripts. The vocabularies of the topics are updated, and posterior inference is then performed to rearrange the topics. In experiments, we verified that the proposed method can discover more salient topics for broadcasting contents.
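
A toy sketch of the vocabulary expansion step is shown below: each unseen blog-post word is attached to the topic whose top script words are most similar under a word2vec model; the corpus, gensim parameters, and the use of n_similarity are illustrative assumptions.

```python
# Sketch of semantic vocabulary expansion with word2vec; the toy corpus and
# the topic representation are assumptions, not the paper's setup.
from gensim.models import Word2Vec

# tokenised sentences from scripts and blog posts (toy example)
sentences = [["detective", "case", "murder", "evidence"],
             ["love", "date", "wedding", "family"],
             ["suspect", "alibi", "investigation", "murder"],
             ["romance", "proposal", "wedding", "love"]]
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, seed=0)

# topics of the script, each represented by its current top words
topics = {0: ["detective", "murder", "evidence"],
          1: ["love", "wedding", "family"]}

def expand_topics(blog_words, topics, wv):
    """Add each blog word to its most semantically correlated topic."""
    for w in blog_words:
        if w not in wv:
            continue
        best = max(topics, key=lambda t: wv.n_similarity([w], topics[t]))
        topics[best].append(w)
    return topics

print(expand_topics(["alibi", "proposal"], topics, model.wv))
```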

Comparison of Frequency-Domain Contention Schemes in Wireless LANs

In IEEE 802.11 networks, it is well known that the traditional time-domain contention often leads to low channel utilization. The first frequency-domain contention scheme, Time-to-Frequency (T2F), has recently been proposed to improve channel utilization and has attracted a great deal of attention. In this paper, we present the latest research progress on weighted frequency-domain contention. We compare the basic ideas and working principles of the related schemes and point out their differences. This paper provides a useful basis for further study of frequency-domain contention.

A Framework for Review Spam Detection Research

With the increasing number of people reviewing products online in recent years, opinion-sharing websites have become the most important source of customers' opinions. Unfortunately, spammers generate and post fake reviews in order to promote or demote brands and to mislead potential customers. Such reviews are notably destructive not only for potential customers, but also for business owners and manufacturers. However, research in this area is not yet adequate, and many critical problems related to spam detection remain unsolved. To aid newcomers to the domain, in this paper we have attempted to create a high-quality framework that provides a clear view of review spam-detection methods. In addition, this report contains a comprehensive collection of the detection metrics used in proposed spam-detection approaches. These metrics are highly applicable to the development of novel detection methods.

Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence

Scripts are one of the basic text resources for understanding broadcasting contents. Topic modeling is a method for summarizing the broadcasting contents from their scripts. Generally, scripts describe contents through directions and speeches, and provide scene segments that can be seen as semantic units. Therefore, a script can be topic modeled by treating each scene segment as a document. Because scene segments consist mainly of speech, however, relatively few word co-occurrences are observed within them. This inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method that improves topic quality with additional word co-occurrence information obtained from scene similarities. The main idea is that knowing that two or more texts are topically related is useful for learning high-quality topics; in turn, more accurate topical representations yield more accurate information about whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is high enough, and we consider words to co-occur if they appear together in topically related scene segments. By iteratively inferring topics and determining semantically neighboring scene segments, we obtain a topic space that represents the broadcasting contents well. In the experiments, we show that the proposed method generates higher-quality topics from Korean drama scripts than the baselines.
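
One iteration of this idea might look like the sketch below: learn topics over scene segments, find topically similar pairs, and append the words of related segments as extra co-occurrence context before re-fitting; the vectorizer, LDA settings, and similarity threshold are illustrative assumptions.

```python
# Sketch of one iteration: topics over scene segments, topical similarity
# between segments, and augmented co-occurrence context; all settings are
# assumptions for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def one_iteration(scenes, n_topics=5, threshold=0.8):
    vec = CountVectorizer()
    X = vec.fit_transform(scenes)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(X)                 # scene-topic proportions

    sim = cosine_similarity(doc_topic)               # topical similarity
    augmented = []
    for i, text in enumerate(scenes):
        related = [scenes[j] for j in range(len(scenes))
                   if j != i and sim[i, j] >= threshold]
        # words of related scenes are treated as co-occurring with scene i
        augmented.append(" ".join([text] + related))
    return augmented

if __name__ == "__main__":
    scenes = ["the detective inspects the crime scene",
              "the detective questions a suspect about the crime",
              "the couple plans their wedding dinner",
              "the couple argues about the wedding guests"]
    for s in one_iteration(scenes, n_topics=2, threshold=0.6):
        print(s)
```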

APPLE: Providing Absolute and Proportional Throughput Guarantees in Wireless LANs

This paper proposes the APPLE scheme, which aims at providing absolute and proportional throughput guarantees while simultaneously maximizing system throughput for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.

A Framework for Evaluating the QoS and Cost of Web Services Based on Its Functional Performance

In the corporate world, the technology of Web services has grown rapidly, and its significance for the development of web-based applications has gradually risen over time. The success of Business-to-Business integration relies on finding new partners and their services in a global business environment. However, selecting the most suitable Web service from a list of services with identical functionality is even more vital. The customer's satisfaction level and the provider's reputation primarily depend on the degree to which the Web service meets the customer's requirements. In many cases, the customer feels that they are paying for a service that is not delivered, because they believe the real functionality of the Web service is never reached; this leads to frequent changes of service. In this paper, a framework is proposed to evaluate the Quality of Service (QoS) and its cost, and to establish the optimal correlation between the two. In addition, this work proposes management decisions to address deviations from the functionality of the Web service that was guaranteed at the time of selection.

A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is improved to achieve fast and efficient LV segmentation. First, a robust and efficient detector based on Hough forests localizes cardiac feature points, which are used to predict the initial fit of the LV shape model. Second, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With this robust initialization, ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the feature point detection accuracy of the proposed initialization was 40% higher than that of existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting iterations required and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes caused by cardiac diseases, such as left atrial enlargement.

Optimal Image Representation for Linear Canonical Transform Multiplexing

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of transmitting or storing visual data in the most economical way. This paper explains how images can be encoded for transmission over a multiplexed time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). The results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, and finally submitted to a transmission multiplexing operation in the time-frequency domain.
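
As an illustration of one of the evaluated approximations, the sketch below applies a truncated SVD to each 4 × 4 block and reports the PSNR of the reconstruction; the rank, block handling, and test image are assumptions for demonstration.

```python
# Sketch of a rank-r SVD approximation of each 4 x 4 block and the PSNR of
# the reconstruction; the rank and test image are illustrative assumptions.
import numpy as np

def svd_approx_block(block, r=1):
    """Rank-r approximation of one 4 x 4 block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def compress_image(img, r=1):
    """Apply the block-wise approximation; image sides must be multiples of 4."""
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], 4):
        for j in range(0, img.shape[1], 4):
            out[i:i+4, j:j+4] = svd_approx_block(img[i:i+4, j:j+4].astype(float), r)
    return out

def psnr(original, approx, peak=255.0):
    mse = np.mean((original.astype(float) - approx) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    approx = compress_image(img, r=2)
    print(round(psnr(img, approx), 2), "dB")
```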