Abstract: Talent management in modern organizations has become data-driven, owing both to the demand for objective human resource decision making and to advances in analytics technologies. HR managers nevertheless face several obstacles in exploiting data and information for effective talent management decisions: process-based data and records; insufficient human capital measures and metrics; a lack of capability for strategic data modeling; and the time consumed in aggregating numbers before decisions can be made. This paper proposes a talent management framework that integrates the talent value chain with human capital analytics. The framework encompasses the key data, measures, and metrics relevant to strategic talent management decisions along the organizational and talent value chains. Moreover, specific predictive and prescriptive models incorporating these data are recommended to help managers understand the state of their talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.
Abstract: Organizations hold structured and unstructured information in different formats, sources, and systems. Part of it originates in ERP systems under OLTP processing that support the operational information system; at the OLAP level, however, these organizations exhibit deficiencies, partly because there is little interest in extracting knowledge from their data sources and partly because they lack the operational capability to tackle such projects. Data warehouses and their applications are considered non-proprietary tools of great interest for business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational models, spatial data models, and a data modeling baseline under UML and Big Data, with the goal of delivering an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally incorporates processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.
Abstract: Automatic program generation saves time and human resources and yields syntactically clear and logically correct modules. Fourth-generation programming languages are concerned with modeling the data and processes of the subject area, as well as with obtaining a frame of the corresponding information system. The application can be separated into interface and business logic. This means that an interactive generation of the needed system can either use an already existing toolkit or require creating a new one.
Abstract: Recent progress in the next generation of automobile technology is geared towards incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence to cars, providing comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model needed to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents time-variant traffic in the streets. The modeled data structure is then exemplified in a navigational scenario, and its usage is illustrated using UML. Optimal driving-route search under dynamically changing road conditions is also discussed using the proposed data structure.
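Route search over time-variant road conditions can be sketched as a time-dependent variant of Dijkstra's algorithm, where each edge's traversal cost is a function of the time at which the edge is entered. The sketch below is illustrative, assuming a simple adjacency-list graph; the paper's actual data structure is not specified here, so `travel_time_fn` stands in for whatever the time-variant traffic model provides.

```python
import heapq

def shortest_time_route(graph, source, target, depart_time):
    """Time-dependent Dijkstra: edge weights vary with entry time.

    graph: {node: [(neighbor, travel_time_fn), ...]} where
    travel_time_fn(t) returns the traversal time when entering the
    edge at time t -- an illustrative stand-in for the paper's
    time-variant road-condition data structure.
    """
    best = {source: depart_time}          # earliest known arrival times
    prev = {}                             # predecessor map for the route
    heap = [(depart_time, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == target:
            # Reconstruct the route from the predecessor map.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), t
        if t > best.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, travel_time in graph.get(node, []):
            arrival = t + travel_time(t)  # cost depends on entry time
            if arrival < best.get(nbr, float("inf")):
                best[nbr] = arrival
                prev[nbr] = node
                heapq.heappush(heap, (arrival, nbr))
    return None, float("inf")
```

Note that this relies on travel times being non-negative and the network obeying first-in-first-out behavior (leaving earlier never means arriving later), a standard assumption for time-dependent shortest paths.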
Abstract: Network design and data modeling are the two significant research tasks required to enable power control of a microgrid using IEC 61850 data models and services. The current coverage of IEC 61850 includes substation automation systems, communication among substations, and communication to DERs. Thus, for LV microgrid power control, before the IEC 61850 services can be used to control smart electrical devices, those devices must be modeled as IEC 61850 data models and a network topology must be designed to support seamless communication among them. In addition, although IEC 61850 assists modeling in part by providing several object models for common functions such as measurement, metering, and monitoring, there are still missing pieces for building a variety of functions for household appliances, such as tuning the temperature of an electric heater or refrigerator.
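The IEC 61850 data model is a hierarchy of logical devices, logical nodes, data objects, and data attributes with functional constraints. The sketch below mirrors that hierarchy in plain Python to illustrate what modeling a household appliance as an extension node might look like. The node and attribute names here ("ZHTR1", "TmpSpt", "setMag") are invented for illustration and are not normative definitions from the standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the IEC 61850 hierarchy:
# logical node -> data object -> data attribute.
# Names and structure are simplified stand-ins, not the standard itself.

@dataclass
class DataAttribute:
    name: str
    value: object
    fc: str  # functional constraint, e.g. "SP" (setpoint), "MX" (measurand)

@dataclass
class DataObject:
    name: str
    attributes: dict = field(default_factory=dict)

    def set(self, attr, value, fc="SP"):
        self.attributes[attr] = DataAttribute(attr, value, fc)

    def get(self, attr):
        return self.attributes[attr].value

@dataclass
class LogicalNode:
    name: str
    objects: dict = field(default_factory=dict)

    def add(self, do):
        self.objects[do.name] = do

# A hypothetical extension node for a household electric heater,
# filling the kind of gap the abstract describes:
heater = LogicalNode("ZHTR1")    # invented node name for illustration
tmp_spt = DataObject("TmpSpt")   # temperature setpoint, illustrative
tmp_spt.set("setMag", 21.5, fc="SP")
heater.add(tmp_spt)
```

A real implementation would instead define such a node following the naming and modeling rules of IEC 61850-7-x, but the container structure is the same idea.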
Abstract: Trends in business intelligence, e-commerce, and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require an efficient computerized solution to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. The DTS package described here was developed for the sales of a variety of plants and eventually expanded into a commercial supply and landscaping business. Dimensional data modeling is used in the DTS package to extract, transform, and load data from heterogeneous database systems such as MySQL, Microsoft Access, and Oracle, consolidating it into a data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter to support efficient sales analysis. DTS is therefore an attractive solution for the automatic data transfer and update that today's businesses need.
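The extract-transform-load pattern behind such a DTS package can be sketched compactly. Below, two in-memory SQLite databases stand in for the heterogeneous source systems (MySQL, Access, Oracle) and a third for the consolidated data mart; the table and column names are illustrative, not taken from the paper.

```python
import sqlite3

# Minimal ETL sketch in the spirit of a DTS package: extract rows from
# heterogeneous sources, transform them into a common dimensional shape,
# and load them into a consolidated sales data mart.

def make_source(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (plant TEXT, quarter TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    return conn

def etl(sources, mart):
    for name, conn in sources.items():
        # Extract from each source system.
        rows = conn.execute("SELECT plant, quarter, amount FROM sales")
        # Transform into the mart's common row layout, tagging the source.
        staged = [(p, q, float(a), name) for p, q, a in rows]
        # Load into the consolidated fact table.
        mart.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?)", staged)
    mart.commit()

mart = sqlite3.connect(":memory:")
mart.execute("CREATE TABLE sales_fact "
             "(plant TEXT, quarter TEXT, amount REAL, source TEXT)")
sources = {
    "nursery": make_source([("Fern", "Q1", 120.0)]),
    "landscaping": make_source([("Oak", "Q1", 300.0)]),
}
etl(sources, mart)
total = mart.execute("SELECT SUM(amount) FROM sales_fact").fetchone()[0]
```

In a production DTS package, the per-quarter scheduling and connections to the real DBMSs would replace this in-memory setup, but the extract/transform/load flow is the same.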
Abstract: A MATLAB-based software package for logistic regression is developed to enhance the teaching of quantitative topics and to assist researchers in analyzing the wide range of applications involving categorical data. The software offers an option of performing stepwise logistic regression to select the most significant predictors. It includes a feature to detect influential observations in the data, and it investigates the effect of dropping or misclassifying an observation on a predictor variable. The input data may consist either of individual responses (yes/no) with the predictor variables or of grouped records summarizing the various categories for each unique set of predictor-variable values. Graphical displays are used to output various statistical results and to assess the goodness of fit of the logistic regression model. The software recognizes possible convergence constraints when they are present in the data, and the user is notified accordingly.
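The core fitting step of such software is maximum-likelihood estimation of the logistic model, typically via Newton-Raphson (iteratively reweighted least squares). The Python sketch below illustrates that standard scheme; it is a minimal re-expression of the textbook algorithm, not the paper's MATLAB code. Convergence failure of this iteration under complete separation is exactly the kind of constraint the abstract says the software detects.

```python
import numpy as np

def fit_logistic(X, y, iters=25, tol=1e-8):
    """Fit logistic regression by Newton-Raphson (IRLS).

    X: (n, p) design matrix, including a column of ones for the intercept.
    y: (n,) array of 0/1 responses.
    Returns the coefficient vector beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        w = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)                  # score vector
        hess = X.T @ (X * w[:, None])         # observed information
        step = np.linalg.solve(hess, grad)    # Newton step
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

With completely separated data the information matrix degenerates and the coefficients diverge, so a robust implementation monitors the step size and the conditioning of `hess` and warns the user, as the described software does.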
Abstract: The emerging Semantic Web has attracted many researchers and developers. New applications have been developed on top of the Semantic Web, and many supporting tools have been introduced to improve its software development process. Metadata modeling is one part of the development process for which supporting tools exist. The existing tools, however, lack the readability and ease of use that would let a domain expert graphically model a problem as a semantic model. In this paper, a metadata modeling tool called RDFGraph is proposed to solve these problems. RDFGraph is also designed to work with modern database management systems that support RDF and to improve the performance of query execution. Testing shows that the rules used in RDFGraph follow the W3C standard and that the graphical models produced by the tool are properly and correctly translated.
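The data model such a tool manipulates is the RDF graph: a set of subject-predicate-object triples, queried by triple patterns with variables. The sketch below shows this model in miniature; the URIs and predicates are illustrative placeholders, and a real tool like RDFGraph would follow the W3C RDF vocabulary and delegate storage to an RDF-capable DBMS.

```python
# Minimal in-memory RDF triple store sketch. None acts as a wildcard,
# playing the role of a variable in a SPARQL-style triple pattern.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, s=None, p=None, o=None):
        # Return all triples matching the pattern; None matches anything.
        return [(ts, tp, to) for ts, tp, to in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

store = TripleStore()
store.add("ex:Alice", "rdf:type", "ex:Person")   # illustrative URIs
store.add("ex:Alice", "ex:knows", "ex:Bob")
people = store.query(p="rdf:type", o="ex:Person")
```

A graphical modeling tool draws these triples as labeled nodes and edges; translation correctness then means the drawn graph and the stored triple set denote the same RDF graph.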
Abstract: The choice of data modeling technique for an information system is determined by the objective of the resulting data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, producing data models that ease analysis and querying, in contrast with entity-relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently driven the development of dimensional modeling. This development, however, has been significantly greater, and better reported, for commercial database management systems than for open-source ones, making dimensional modeling less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open-source modeling tools. It aims to take advantage of the fact that the regions most affected by HIV (sub-Saharan Africa) are also heavily resource constrained while holding large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open-source dimensional modeling tools. The use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible for those in resource-constrained settings who nonetheless have data available.
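The output of dimensional modeling is a star schema: a central fact table keyed to surrounding dimension tables. The sketch below, using the open-source SQLite engine, shows the shape of such a schema; the dimensions (patient, date) and measure (CD4 count) are illustrative placeholders, not the actual dimensions and facts identified from the paper's two source systems.

```python
import sqlite3

# Illustrative star schema: one fact table (clinic visits) joined to
# dimension tables via surrogate keys. All names and values are
# hypothetical examples of the pattern, not the paper's schema.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_patient (
    patient_key INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY, year INTEGER, quarter INTEGER);
CREATE TABLE fact_visit (
    patient_key INTEGER REFERENCES dim_patient(patient_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    cd4_count   INTEGER);
""")
conn.execute("INSERT INTO dim_patient VALUES (1, 'F', 1985)")
conn.execute("INSERT INTO dim_date VALUES (20240101, 2024, 1)")
conn.execute("INSERT INTO fact_visit VALUES (1, 20240101, 350)")

# Analytical queries slice the facts by dimension attributes:
row = conn.execute("""
    SELECT d.year, AVG(f.cd4_count)
    FROM fact_visit f JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year
""").fetchone()
```

Queries like the final one, grouping a measure by dimension attributes, are what the star layout optimizes, and they run the same way on any open-source engine.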