Meta-Search in Human Resource Management

In the area of Human Resource Management, the trend is towards online exchange of information about human resources. For example, online applications for employment have become standard, and job offerings are posted in many job portals. However, there are too many job portals for anyone interested in a new job to monitor them all. We developed a prototype for integrating the information of different job portals into one meta-search engine. First, existing job portals were investigated and XML schema documents were derived automatically from these portals. Second, translation rules for transforming each schema into a central HR-XML-conformant schema were determined. The HR-XML schema is used to build a form for searching jobs. The data supplied by a user in this form is then translated into queries for the different job portals. Each result returned by a job portal is sent to the meta-search engine, which ranks all received job offers according to the user's preferences.
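The following is a minimal sketch, assuming a hypothetical portal format and a simplified HR-XML-like target schema, of what a translation rule between a portal-specific posting and the central schema might look like; the tag names and mapping are illustrative, not those of the actual prototype.

```python
# Sketch of translating a portal-specific job posting into a central schema.
# TRANSLATION_RULES and all element names are assumptions for illustration.
import xml.etree.ElementTree as ET

# Hypothetical mapping: portal-specific tag -> central-schema tag
TRANSLATION_RULES = {
    "jobtitle": "PositionTitle",
    "city": "Location",
    "salary": "Remuneration",
}

def translate_posting(portal_xml: str) -> ET.Element:
    """Translate one portal job posting into the central schema."""
    src = ET.fromstring(portal_xml)
    dst = ET.Element("PositionOpening")          # central-schema root (illustrative)
    for portal_tag, central_tag in TRANSLATION_RULES.items():
        node = src.find(portal_tag)
        if node is not None:
            ET.SubElement(dst, central_tag).text = node.text
    return dst

# Example: a posting from a fictional portal
print(ET.tostring(translate_posting(
    "<job><jobtitle>Data Engineer</jobtitle><city>Berlin</city></job>"
), encoding="unicode"))
```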

Constructing Simple Polygonalizations

We consider methods for constructing simple polygons on a set S of n points and apply them to the search for the minimal-area polygon. In this paper we propose an approximate algorithm that generates simple polygonalizations of a fixed set of points and finds the minimal-area polygon in O(n³) time using O(n²) memory.
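As an illustration of the objects involved, the sketch below builds one simple polygonalization by sorting the points angularly around the lowest point and measures its area with the shoelace formula. It is a toy construction for context only, not the approximation algorithm proposed in the paper.

```python
# Build one simple polygon on a point set and compute its area (shoelace formula).
import math

def simple_polygon(points):
    """Order the points so that the closed polygon through them is simple."""
    pivot = min(points, key=lambda p: (p[1], p[0]))        # lowest point
    rest = [p for p in points if p != pivot]
    rest.sort(key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                             (p[0] - pivot[0]) ** 2 + (p[1] - pivot[1]) ** 2))
    return [pivot] + rest

def area(polygon):
    """Shoelace formula for the area of a simple polygon."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

pts = [(0, 0), (4, 1), (2, 3), (1, 4), (3, 2)]
poly = simple_polygon(pts)
print(poly, area(poly))
```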

Effective Keyword and Similarity Thresholds for the Discovery of Themes from User Web Access Patterns

Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes that express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords. The major aim of these techniques is to come up with an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we focus on both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of user behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cut-off levels for the keyword and similarity thresholds that tend to produce clusters with better focus and more efficient size.
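A minimal sketch of how the two thresholds interact is given below: each access pattern is reduced to its top-k keywords (keyword threshold) and merged into an existing theme only when cosine similarity exceeds the similarity threshold. The threshold values and toy data are assumptions for illustration, not the cut-off levels recommended by the paper.

```python
# Threshold-based grouping of keyword-weighted access patterns into themes.
from collections import Counter

def top_k(keyword_counts, k):
    """Keep only the k most frequent keywords of one access pattern."""
    return dict(Counter(keyword_counts).most_common(k))

def cosine(a, b):
    dot = sum(a.get(w, 0) * b.get(w, 0) for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def cluster(patterns, keyword_threshold=5, similarity_threshold=0.4):
    themes = []                                   # each theme is a list of patterns
    for p in patterns:
        vec = top_k(p, keyword_threshold)
        for theme in themes:
            if cosine(vec, theme[0]) >= similarity_threshold:
                theme.append(vec)
                break
        else:
            themes.append([vec])
    return themes

logs = [{"news": 4, "sport": 2}, {"sport": 3, "score": 2}, {"python": 5, "xml": 1}]
print(len(cluster(logs)))
```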

Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection

This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate, and fast searches for queries in image data. This research describes a new indexing approach that depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of an extensive evaluation conducted to optimize (1) the choice of the basis matrix (B) and (2) the size of the index (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
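The core of the indexing idea can be sketched in a few lines: an image, flattened to a vector, is approximated as a linear combination of N basis images, and the coefficient vector serves as its index. The random basis and sizes below are placeholders for illustration; the paper's evaluation concerns how B and N should actually be chosen.

```python
# Least-squares projection of an image onto a basis; coefficients form the index.
import numpy as np

rng = np.random.default_rng(0)
N, pixels = 8, 64 * 64                       # index size and image size (assumed)
B = rng.standard_normal((pixels, N))         # basis matrix: one basis image per column
image = rng.standard_normal(pixels)          # the image to be indexed, flattened

# Least-squares coefficients: argmin_a ||B a - image||_2
index, *_ = np.linalg.lstsq(B, image, rcond=None)
print(index.shape)                           # (N,) -> stored as the image's index

# Query: images whose coefficient vectors are close are candidate matches
noisy = image + 0.01 * rng.standard_normal(pixels)
query_index, *_ = np.linalg.lstsq(B, noisy, rcond=None)
print(np.linalg.norm(index - query_index))
```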

The Management of Large Emergency Situations – A Best Practice Case Study Based on GIS for Management of Evacuation

In most cases, natural disasters lead to the necessity of evacuating people. The quality of evacuation management is dramatically improved by the use of information provided by decision support systems, which become indispensable in large-scale evacuation operations. This paper presents a best practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans-Border Evacuation) was developed in house by the experts of this institution. This original software system was successfully tested in September 2008 during the international exercise EU-HUROMEX 2008, whose scenario involved the real evacuation of 200 persons from Hungary to Romania. Based on the lessons learned and the results, since April 2009 the TEVAC software has been used by all Emergency Situations Inspectorates across Romania.

3D Star Skeleton for Fast Human Posture Representation

In this paper, we propose an improved 3D star skeleton technique, a skeletonization suitable for human posture representation that reflects the 3D information of the posture. Moreover, the proposed technique is simple and can therefore be performed in real time. Existing skeleton construction techniques, such as distance transformation, Voronoi diagrams, and thinning, focus on the precision of the skeleton information. Consequently, those techniques are not applicable to real-time posture recognition, since they are computationally expensive and highly susceptible to boundary noise. Although a 2D star skeleton was proposed to overcome these problems, it also has limitations in describing the 3D information of the posture. To represent human posture effectively, the constructed skeleton should take the 3D information of the posture into account. The proposed 3D star skeleton contains 3D data of the human and focuses on human action and posture recognition. Our 3D star skeleton uses eight projection maps that hold 2D silhouette information and depth data of the human surface, and the extremal points can be extracted as the features of the 3D star skeleton without searching the whole boundary of the object. Therefore, in terms of execution time, our 3D star skeleton is faster than a "greedy" 3D star skeleton that uses all boundary points on the surface. Moreover, our method can offer a more accurate skeleton of the posture than the existing star skeleton, since the 3D data of the object is taken into account. Additionally, we build a codebook, a collection of representative 3D star skeletons for seven postures, to recognize the posture of a constructed skeleton.
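For orientation, the sketch below shows the basic 2D star-skeleton idea the technique builds on: the distance from the silhouette centroid to each boundary point is smoothed, and its local maxima are taken as extremal points (head, hands, feet). The smoothing width and the toy silhouette are assumptions; the paper's 3D version, which additionally uses depth from eight projection maps, is not reproduced here.

```python
# Extremal points of a 2D star skeleton from an ordered silhouette boundary.
import numpy as np

def star_extremal_points(boundary, smooth=5):
    """boundary: (K, 2) array of silhouette boundary points in order."""
    centroid = boundary.mean(axis=0)
    dist = np.linalg.norm(boundary - centroid, axis=1)
    kernel = np.ones(smooth) / smooth
    padded = np.concatenate([dist[-smooth:], dist, dist[:smooth]])   # circular padding
    dist_s = np.convolve(padded, kernel, mode="same")[smooth:-smooth]
    # local maxima of the smoothed distance signal (circular)
    prev, nxt = np.roll(dist_s, 1), np.roll(dist_s, -1)
    return boundary[(dist_s > prev) & (dist_s > nxt)]

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
toy = np.c_[np.cos(theta) * (1 + 0.3 * np.cos(5 * theta)),
            np.sin(theta) * (1 + 0.3 * np.cos(5 * theta))]
print(len(star_extremal_points(toy)))        # roughly 5 extremal points for this toy shape
```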

A New Hybrid Algorithm for Task Scheduling in Grid Computing to Decrease Missed Tasks

The purpose of grid computing is to utilize the computational power of idle resources distributed across different areas. Given the grid's dynamism and its decentralized resources, there is a need for an efficient scheduler for scheduling applications. Since task scheduling is an NP-hard problem, much research has focused on heuristic algorithms, especially genetic algorithms. However, because a genetic algorithm searches the problem space globally and lacks the efficiency required for local search, combining it with local search algorithms can compensate for this shortcoming. The aim of this paper is to combine the genetic algorithm with GELS (GAGELS) as a method to solve the scheduling problem while simultaneously considering two factors: time and the number of missed tasks. Results show that the proposed algorithm can decrease makespan while minimizing the number of missed tasks compared with traditional methods.
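A minimal sketch of the hybrid idea follows: a genetic algorithm explores assignments of tasks to resources globally, and a simple local search refines each offspring. Plain hill climbing stands in here for GELS, and the fitness combines makespan with a missed-task penalty; all task lengths, speeds, and parameters are illustrative, not the paper's settings.

```python
# Hybrid GA + local search for assigning tasks to grid resources.
import random

TASKS = [4, 7, 2, 9, 5, 3]                     # task lengths (illustrative)
SPEEDS = [1.0, 2.0, 1.5]                       # resource speeds (illustrative)
DEADLINE = 10.0

def fitness(assign):
    """Makespan plus a penalty for every task finishing after the deadline."""
    loads = [0.0] * len(SPEEDS)
    missed = 0
    for length, r in zip(TASKS, assign):
        loads[r] += length / SPEEDS[r]
        if loads[r] > DEADLINE:
            missed += 1
    return max(loads) + 5.0 * missed

def local_search(assign, steps=20):
    """Hill climbing on single-task reassignments (stand-in for GELS)."""
    best = list(assign)
    for _ in range(steps):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.randrange(len(SPEEDS))
        if fitness(cand) < fitness(best):
            best = cand
    return best

def hybrid_ga(pop_size=20, generations=30):
    pop = [[random.randrange(len(SPEEDS)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            children.append(local_search(a[:cut] + b[cut:]))   # crossover then local refinement
        pop = parents + children
    return min(pop, key=fitness)

best = hybrid_ga()
print(best, fitness(best))
```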

Grouping and Indexing Color Features for Efficient Image Retrieval

Content-Based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
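The pipeline can be sketched as follows: mean shift clusters the pixels of each image in 3-D color space, the cluster modes become the image's representative colors, and retrieval finds stored colors near a query color. A brute-force Euclidean search stands in here for the R*-tree spatial index, and the random "images" and bandwidth are illustrative assumptions.

```python
# Representative-color extraction via mean shift, plus a toy color query.
import numpy as np
from sklearn.cluster import MeanShift

def representative_colors(pixels_rgb, bandwidth=30):
    """pixels_rgb: (P, 3) array of RGB values for one image."""
    ms = MeanShift(bandwidth=bandwidth).fit(pixels_rgb)
    return ms.cluster_centers_                       # cluster modes as descriptors

rng = np.random.default_rng(1)
database = {f"img{i}": representative_colors(rng.integers(0, 256, (500, 3)).astype(float))
            for i in range(3)}

def query(color, k=2):
    """Rank images by the distance of their nearest representative color."""
    scored = []
    for name, colors in database.items():
        d = np.min(np.linalg.norm(colors - np.asarray(color, float), axis=1))
        scored.append((d, name))
    return sorted(scored)[:k]

print(query([200, 30, 30]))
```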

Bio-inspired Audio Content-Based Retrieval Framework (B-ACRF)

Content-based music retrieval generally involves analyzing, searching, and retrieving music based on low- or high-level features of a song, which are normally used to represent artists, songs, or music genres. Identifying them normally involves feature extraction and classification tasks. Theoretically, the more features analyzed, the better the classification accuracy that can be achieved, but at the cost of longer execution time. A technique for selecting significant features is therefore important, as it reduces the dimensionality of the features used in classification and contributes to accuracy. An Artificial Immune System (AIS) approach is investigated and applied to the classification task. A bio-inspired audio content-based retrieval framework (B-ACRF) is proposed at the end of this paper; it addresses issues that need further consideration in music retrieval performance.

The CEO Mission II, Rescue Robot with Multi-Joint Mechanical Arm

This paper presents the design features of a rescue robot named CEO Mission II. Its body is a tracked-wheel type with double front flippers for climbing over collapsed structures and rough terrain. With a 125 cm long, 5-joint mechanical arm installed on the robot body, it is deployed not only for surveillance from the top view but also for easier and faster access to victims to read their vital signs. Two cameras and the vital-sign sensors are mounted at the tip of the multi-joint mechanical arm; a third camera at the back of the robot is used for driving control. The hardware and software of the system that controls and monitors the rescue robot are explained. The control system is used for controlling the robot locomotion and the 5-joint mechanical arm, and for turning devices on and off. The monitoring system gathers information from 7 distance sensors, IR temperature sensors, 3 CCD cameras, a voice sensor, robot wheel encoders, yaw/pitch/roll angle sensors, a laser range finder, and 8 spare A/D inputs. All sensor and control data are communicated with a remote control station via IEEE 802.11b Wi-Fi. The audio and video data are compressed and sent via a separate IEEE 802.11g Wi-Fi transmitter to obtain real-time response. At the remote control station, the robot locomotion and the mechanical arm are controlled by joystick. Moreover, a user-friendly GUI control program based on a click-and-drag method is developed to easily control the movement of the arm. The robot's traveling map is plotted by combining the wheel encoder information with the yaw/pitch data, and a 2D obstacle map is plotted from the laser range finder data. The concept and design of this robot can be adapted to suit many other applications. The robot received the Best Technique award at the Thailand Rescue Robot Championship 2006, and all testing results were satisfactory.
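As a small illustration of the traveling-map computation mentioned above, the sketch below integrates per-step encoder distance with yaw readings (dead reckoning) to produce the robot's path. The sample values are made up, and this is only the generic technique, not the robot's actual firmware.

```python
# Dead reckoning: accumulate (distance, yaw) samples into a 2D path.
import math

def dead_reckon(samples, x=0.0, y=0.0):
    """samples: list of (distance_travelled, yaw_in_radians) per time step."""
    path = [(x, y)]
    for dist, yaw in samples:
        x += dist * math.cos(yaw)
        y += dist * math.sin(yaw)
        path.append((x, y))
    return path

print(dead_reckon([(0.5, 0.0), (0.5, 0.1), (0.5, 0.3)]))
```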

Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image from a set of partial images of the desired view. This paper presents a solution to one of the problems of panorama formation, where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale and fuse them to get a grayscale panorama. But in a multihued world, obtaining a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and transferring them to the grayscale parts of the panorama. So, first the grayscale image parts are colored with the help of the color image parts, and then all the parts are fused to construct the panoramic image. The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, a color palette is prepared using pixel windows of a certain size taken from the color image parts. The grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part, the palette is searched and equivalent color values are found, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search by using Kekre's fast search technique. After coloring the grayscale image pieces, the next task is fusing all the pieces to obtain the panoramic view. Correlation coefficients are used to estimate the similarity between partial images.
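The palette-based coloring step can be sketched as follows: a palette of small pixel windows is built from the color parts, and each grayscale window is colored from the palette entry with the closest luminance pattern. The window size, the luminance weights, and the random data are assumptions for illustration; the paper's Kekre's LUV color space and Kekre's fast search technique are not reproduced here.

```python
# Palette-based color transfer: match grayscale windows to color-window luminances.
import numpy as np

WIN = 2                                              # window size (assumed)

def windows(img, win=WIN):
    h, w = img.shape[:2]
    return [img[r:r + win, c:c + win] for r in range(0, h - win + 1, win)
                                       for c in range(0, w - win + 1, win)]

def luminance(rgb_window):
    return rgb_window @ np.array([0.299, 0.587, 0.114])

def colorize(gray_img, color_img):
    palette = windows(color_img)                                 # palette of color windows
    lum = np.array([luminance(p).ravel() for p in palette])     # their luminance patterns
    out = np.repeat(gray_img[..., None], 3, axis=2).astype(float)
    for r in range(0, gray_img.shape[0] - WIN + 1, WIN):
        for c in range(0, gray_img.shape[1] - WIN + 1, WIN):
            g = gray_img[r:r + WIN, c:c + WIN].ravel()
            best = int(np.argmin(np.linalg.norm(lum - g, axis=1)))   # palette search
            out[r:r + WIN, c:c + WIN] = palette[best]
    return out.astype(np.uint8)

gray = np.random.randint(0, 256, (8, 8)).astype(float)
color = np.random.randint(0, 256, (8, 8, 3)).astype(float)
print(colorize(gray, color).shape)
```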

Academic Digital Library Evaluation Criteria: A User-Centered Approach

Academic digital libraries emerged as a result of advances in computing and information systems technologies and have been introduced in universities and to the public. Moving in parallel with current technology in the learning and research environment offers myriad advantages, especially to students, academicians, and researchers. This is due to the dramatic changes the digital library system has brought to the learning environment, with a considerable impact on the way these communities perform their study and research. This paper presents a survey of current criteria for evaluating the performance of academic digital libraries. The goal is to discuss the criteria applied so far for academic digital library evaluation in the context of user-centered design. Although this paper does not comprehensively cover all previous research on evaluating academic digital libraries, it can serve as a guide to understanding the evaluation criteria that are widely applied.

IT Management: How IT Managers Gain IT Knowledge

It is no secret that IT management has become an increasingly integrated part of almost all organizations. IT managers possess an enormous amount of knowledge, both organizational knowledge and general IT knowledge. This article investigates how IT managers keep themselves updated on IT knowledge in general and looks into how much time IT managers spend each week searching the net for new or problem-solving IT knowledge. The theory used in this paper is applied to investigate the current role of IT managers and the issues they are facing. Furthermore, a study is conducted in which seven IT managers in medium-sized and large Danish companies are interviewed, to add further focus on the role of the IT manager and on how they keep themselves updated. Besides identifying a substantial need for more research, we find that IT managers, whether generalists or specialists, have only limited knowledge resources at hand for updating their own knowledge, leaving much of the initiative to vendors.

Diagnosing the Cause and its Timing of Changes in Multivariate Process Mean Vector from Quality Control Charts using Artificial Neural Network

Quality control charts are very effective at detecting out-of-control signals, but when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the out-of-control condition, because the change point in the process parameter(s) usually differs from the signal time. It is very important for the manufacturer to determine at what point in the past, and in which parameters, the change that caused the signal occurred. Early warning of process change would expedite the search for the special causes and enhance quality at lower cost. In this paper, the quality variables under investigation are assumed to follow a multivariate normal distribution with known mean vector and variance-covariance matrix; the process means after a one-step change are assumed to remain at the new level until the special cause is identified and removed; and only one variable is assumed to change at a time. This research applies an artificial neural network (ANN) to identify the time the change occurred and the parameter that caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well over the whole range of shift magnitudes considered.
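A minimal sketch of the general idea is given below: multivariate normal observations with a one-step shift in a single mean component are simulated, and a neural network is trained to estimate the change point from the window of observations around the signal. The window length, shift size, and network settings are illustrative assumptions, not the paper's simulation design.

```python
# Simulated one-step mean shifts and an MLP that estimates the change point.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
P, WINDOW = 3, 30                               # variables, observations per example

def make_example():
    tau = int(rng.integers(5, WINDOW - 5))      # true change point
    x = rng.standard_normal((WINDOW, P))
    x[tau:, rng.integers(P)] += 1.5             # one-step shift in one variable
    return x.ravel(), tau

X, y = zip(*(make_example() for _ in range(2000)))
X, y = np.array(X), np.array(y)

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X[:1500], y[:1500])
pred = net.predict(X[1500:])
print(np.mean(np.abs(pred - y[1500:])))         # average change-point estimation error
```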

Molecular Dynamics of Fatty Acid Interacting with Carbon Nanotube as Selective Device

In this paper we study a system composed of a carbon nanotube (CNT) and a bundle of carbon nanotubes (BuCNT) interacting with a specific fatty acid as a molecular probe. The full system is represented by an open nanotube (or nanotubes) and linoleic acid (LA) relaxing due to its interaction with the CNT and BuCNT. The LA has an asymmetric shape with a COOH termination, which promotes a close interaction with the BuCNT mainly through van der Waals forces. The simulations were performed by classical molecular dynamics with standard parameterizations. Our results show that the BuCNT and CNT are dynamically stable and exhibit a preferential interaction position with LA, resulting in three features: (i) when the LA interacts with the CNT and BuCNT (with either termination, CH2 or COOH), the LA is repelled; (ii) when the LA terminated with CH2 is close to the open extremity of the BuCNT, the LA is also repelled by the interaction between them; and (iii) when the LA terminated with COOH is close to the open extremity of the BuCNT, the LA is encapsulated by the BuCNT. These simulations are part of a more extensive effort to search for efficient selective molecular devices and could be useful for reaching this goal.

An Ontology for Knowledge Representation and Applications

Ontology is a term used in artificial intelligence with different meanings. Ontology research plays an important role in computer science and practical applications, especially in distributed knowledge systems. In this paper we present an ontology called the Computational Object Knowledge Base Ontology. It has been used in designing knowledge-based systems for solving problems, such as a system that supports studying knowledge and solving analytic geometry problems, a program for studying and solving problems in plane geometry, and a knowledge system for linear algebra.

An Efficient Heuristic for the Minimum Connected Dominating Set Problem on Ad Hoc Wireless Networks

The connected dominating set (CDS) problem in unit disk graphs has a significant impact on the efficient design of routing protocols in wireless sensor networks, where the searching space for a route is reduced to the nodes in the set. A set is dominating if all the nodes in the system are either in the set or neighbors of nodes in the set. In this paper, a simple and efficient heuristic method is proposed for finding a minimum connected dominating set (MCDS) in ad hoc wireless networks, based on the new parameter "support of vertices". With this parameter the proposed heuristic approach effectively finds the MCDS of a graph. Extensive computational experiments show that the proposed approach outperforms the recently proposed heuristics found in the literature for the MCDS problem.
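To make the problem concrete, the sketch below builds a unit disk graph from random points and grows a connected dominating set greedily, always adding the neighbor of the current set that dominates the most still-undominated nodes. This is a generic greedy illustration assuming a connected graph, not the support-of-vertices heuristic proposed in the paper.

```python
# Greedy connected-dominating-set construction on a random unit disk graph.
import math, random

def unit_disk_graph(points, radius=0.35):
    adj = {i: set() for i in range(len(points))}
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i < j and math.dist(p, q) <= radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_cds(adj):
    """Grow a CDS from the highest-degree node; assumes the graph is connected."""
    start = max(adj, key=lambda v: len(adj[v]))
    cds = {start}
    dominated = {start} | adj[start]
    while len(dominated) < len(adj):
        frontier = {v for u in cds for v in adj[u]} - cds      # keeps the set connected
        best = max(frontier, key=lambda v: len(adj[v] - dominated))
        cds.add(best)
        dominated |= {best} | adj[best]
    return cds

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(40)]
print(sorted(greedy_cds(unit_disk_graph(pts))))
```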

Feature Selection for Breast Cancer Diagnosis: A Case-Based Wrapper Approach

This article addresses feature selection for breast cancer diagnosis. The proposed process uses a wrapper approach based on a Genetic Algorithm (GA) and Case-Based Reasoning (CBR). The GA is used to search the space of possible feature subsets, and CBR is employed to estimate the evaluation result of each subset. The experimental results show that the proposed model is comparable to other models on the Wisconsin Breast Cancer (WDBC) dataset.
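A minimal sketch of the wrapper idea follows: a genetic algorithm evolves binary masks over the features, and each mask is scored by a classifier's cross-validated accuracy. A k-nearest-neighbor classifier stands in here for the case-based reasoning evaluator, and the GA settings are illustrative; only the dataset (scikit-learn's Wisconsin breast cancer set) matches the paper.

```python
# GA-based wrapper feature selection with a kNN evaluator on the WDBC data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def score(mask):
    """Cross-validated accuracy of kNN on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)          # stand-in for CBR similarity
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (20, X.shape[1])).astype(bool)
for _ in range(10):                                     # a few GA generations
    ranked = sorted(pop, key=score, reverse=True)
    parents = ranked[:10]
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)       # uniform crossover
        child ^= rng.random(X.shape[1]) < 0.02                     # mutation
        children.append(child)
    pop = np.array(parents + children)

best = max(pop, key=score)
print(best.sum(), "features selected, accuracy %.3f" % score(best))
```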

Global Security Using Human Face Understanding under Vision Ubiquitous Architecture System

Different methods based on biometric algorithms are presented for eigenface representation and detection, including face recognition, identification, and verification. The theme of this research is to manage the critical processing stages (accuracy, speed, security, and monitoring) of face activities with the flexibility of searching and editing the secure authorized database. In this paper we implement different techniques, such as eigenface vector reduction using texture and shape vectors to remove complexity, while a density matching score with Face Boundary Fixation (FBF) extracts the most likely characteristics in the media processing content. We examine the development and performance efficiency of the database by applying our algorithms in both the recognition and the detection phases. Our results show encouraging gains in accuracy and security, with better performance than a number of previous approaches in all of the above processes.
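For context, the sketch below shows the standard eigenface representation the abstract builds on: PCA of a set of face vectors yields eigenfaces, each face is projected onto them, and recognition compares projection coefficients. The random data stands in for a real face database; the paper's FBF and texture/shape vector reduction are not shown.

```python
# Eigenfaces via PCA (SVD) and nearest-neighbor matching of projection coefficients.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.standard_normal((50, 32 * 32))            # 50 flattened 32x32 "faces"

mean_face = faces.mean(axis=0)
centered = faces - mean_face
# Eigenfaces are the principal components of the centered face matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                                   # keep the top 10 components

def project(face):
    return (face - mean_face) @ eigenfaces.T           # coefficient vector

gallery = centered @ eigenfaces.T                      # projected database
probe = faces[7] + 0.05 * rng.standard_normal(faces.shape[1])
match = int(np.argmin(np.linalg.norm(gallery - project(probe), axis=1)))
print("best match:", match)                            # expected: 7
```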

Data Extraction from XML Files Using Searching and Indexing Techniques

XML files contain data in a well-structured format. Studying the format and semantics of the grammar helps with fast retrieval of the data. Many algorithms have been described for searching data in XML files. A number of approaches rely on the document structure, while others relate to the contents of the document. In the structural approaches, the user must know the structure of the document; information retrieval techniques using NLP, by contrast, relate to the content of the document. Hence the results may be irrelevant or less successful, and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, a user is not constrained to a fixed query format.
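The sketch below illustrates indexing that uses both content and structure: each word is indexed together with the tag it appears under, so a query can name a word, a tag, or both. The XML, tag names, and query form are illustrative assumptions; the paper's RXML concept and ranking are not reproduced here.

```python
# A small tag-aware inverted index over an XML document.
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_index(xml_text):
    index = defaultdict(set)                 # (tag, word) -> set of element ids
    for i, elem in enumerate(ET.fromstring(xml_text).iter()):
        for word in (elem.text or "").split():
            index[(elem.tag, word.lower())].add(i)
            index[(None, word.lower())].add(i)      # structure-free lookup too
    return index

def search(index, word, tag=None):
    return index.get((tag, word.lower()), set())

doc = "<library><book>XML retrieval basics</book><paper>fast retrieval</paper></library>"
idx = build_index(doc)
print(search(idx, "retrieval"))              # content-only query
print(search(idx, "retrieval", tag="book"))  # content + structure query
```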