Abstract: Planning order-picking lists so that a warehouse meets its operational performance targets is a significant challenge when logistics costs are relatively high, and it is especially important in the e-commerce era. Many current order-planning techniques employ supervised machine learning algorithms; however, defining features for supervised algorithms is not a simple task. Against this background, we consider whether unsupervised algorithms can enhance the planning of order-picking lists. A double zone picking approach, based on applying clustering algorithms twice, is developed. A simplified example demonstrates the merit of our approach.
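A minimal sketch of the "clustering twice" idea, under our own assumption (not stated in the abstract) that items are first clustered into zones by storage coordinates, and orders are then clustered into picking batches by their per-zone demand profiles. It uses scikit-learn's KMeans; all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# First clustering: group items into storage zones by (x, y) location.
item_locations = rng.uniform(0, 100, size=(60, 2))
n_zones = 3
zone_of_item = KMeans(n_clusters=n_zones, n_init=10,
                      random_state=0).fit_predict(item_locations)

# Orders: each order requests a random subset of the 60 items.
orders = (rng.random((40, 60)) < 0.1).astype(int)

# Per-zone demand vector of each order.
order_profiles = np.zeros((len(orders), n_zones))
for z in range(n_zones):
    order_profiles[:, z] = orders[:, zone_of_item == z].sum(axis=1)

# Second clustering: group orders into picking batches with similar zone profiles.
batch_of_order = KMeans(n_clusters=4, n_init=10,
                        random_state=0).fit_predict(order_profiles)
print(sorted(np.bincount(batch_of_order)))
```

Batches whose orders concentrate in the same zones can then be picked together, which is the intuition behind zone-based list planning.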
Abstract: In times of rapid globalization, the significance of cultural and architectural heritage is rising, as it is a key element in defining the identity of a place, a city, even a country. Its preservation, conservation, and revitalization are everyone's responsibility, and the public is growing more aware of that fact. Citizens are looking for ways to participate actively in decision-making on projects concerning heritage sites. Public involvement in the planning process is not a new phenomenon, especially in Western countries; however, countries such as the former communist states of Eastern Europe have been studied less. Based on established theories, this paper analyses the level of citizens' inclusion in heritage preservation projects, using the example of the Tobacco City in Plovdiv, Bulgaria. As this case is exemplary for Bulgaria, it illustrates the current condition of public participation countrywide. At the same time, considering that the former communist states have had a similar socio-economic and political development over the past several decades, the conclusions can be applied to most of these countries with only slight variations.
Abstract: Traditional federated architectures for data warehousing work well when corporations have existing regional data warehouses and need to aggregate data at a global level. Schibsted Media Group has been maturing from a decentralised organisation into a more globalised one, and needed to build some of the regional data warehouses for some brands at the same time as the global one. In this paper, we present the architectural alternatives studied and explain why a custom federated approach was the recommendation taken forward into implementation. Although the data warehouses are logically federated, the implementation uses a single database system, which brought many advantages: cost reduction, improved data access for global users, a common data model that allows consumers of the data to perform detailed analysis across different geographies, and a flexible layer for local specific needs in the same place.
Abstract: Rack-type warehouses differ from general buildings in the kinds, amounts, and arrangement of the goods they store, so their fire risk also differs. The fire pattern of a rack-type warehouse depends on the combustion characteristics and storage conditions of the stored goods: the initial burning rate depends on the surface condition of the materials, while the progression of the fire over time is closely related to the kinds of materials stored and how they are stored. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because fewer occupants are present than in office and commercial buildings, and if the fire detectors installed in a rack-type warehouse are unsuitable, the delayed detection may let a fire grow into a major one. In this paper, we studied which kinds of fire detectors are best suited to early detection of rack-type warehouse fires, using real-scale fire tests. The detectors used in the tests were rate-of-rise, fixed-temperature, photoelectric, and aspirating detectors. From the response characteristics and a comparative analysis of these detectors, we suggest an optimal fire detection method for rack-type warehouses.
Abstract: The vehicle is one of the most influential and complex products worldwide, affecting people's lives, the state of the environment, and the condition of the economy (all aspects of the sustainable development concept) during each stage of its lifecycle. With the growing number of vehicles, there is growing potential in the management of End-of-Life Vehicles (ELVs), which are hazardous waste. From one point of view, an ELV should be managed so as to eliminate risk, but from another it should be treated as a source of valuable materials and spare parts. To recover these materials and spare parts, recycling networks have been established, an example of sustainable policy realization at the national level. The basic unit in the Polish recycling network is the dismantling facility. The output material streams of dismantling stations include waste, which very often generates costs, and spare parts, which have the greatest potential for creating revenue. Both outputs are stored in warehouses, in accordance with the law. Given this revenue-creation and sustainability potential, strong emphasis has been placed on the storage process. We present the concept of a storage method that takes the specifics of the dismantling facility into account in order to support the decision-making process in line with the principles of sustainable development. The method was developed on the basis of a case study of one of the largest dismantling facilities in Poland.
Abstract: Organizations hold structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems, whose OLTP processing supports the information system; at the OLAP processing level, however, these organizations show deficiencies, partly because there is no interest in extracting knowledge from their data sources and partly because they lack the operational capability to tackle such projects. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision-making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP, together with object-relational and spatial data models and a baseline of data modeling under UML and Big Data. In this way it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account the processes for the corresponding information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.
Abstract: This work proposes a multi-objective mathematical programming approach for selecting the appropriate supply network elements. A multi-item, multi-objective production-distribution inventory model is formulated with the relevant constraints in a fuzzy environment, with the unit cost treated as fuzzy. The inventory model and the warehouse location model are combined to formulate the production-distribution inventory model. Warehouse location is important in a supply chain network: in particular, if a company operates many selling stores, it cannot maintain an individual secondary warehouse near each store, so maintaining the optimum number of secondary warehouses matters. Hence, the combined mathematical model is formulated to reduce the total expenditure of the organization by arranging a network with the minimum number of secondary warehouses. A numerical example illustrates the proposed model.
Abstract: Istanbul's Karakoy Port, the field of this study, has lost its former significance over time due to the transformation of urban functions. Today, activities to regenerate this region continue in two forms and at two scales. The first consists of the "planned transformation projects," which include the "Galataport project"; the second is "spontaneous transformation," consisting of individual interventions. The Galataport project, based on the idea of arranging the area specifically for tourists, was prepared in 2005 and became a topic of tremendous public debate. The "spontaneous transformation" observed in the Karakoy District, on the other hand, began in 2004 with the foundation of the "Istanbul Modern Museum," which integrated the old naval warehouses of the port into cultural daily life. Following this adaptive-reuse intervention, the district began to accommodate numerous art galleries, studios, café-workshops, and design stores. In this context, this paper first examines regeneration studies in obsolete port regions, then analyzes the planned and ongoing socio-spatial transformations in the specific case of Karakoy, and performs a critical review of the sustainability of the proposals on how to reinstate the district in the active life of Istanbul.
Abstract: Much effort and research has been devoted to developing efficient tools for the various tasks of data mining. Because of the massive amount of information embedded in the huge data warehouses maintained in many domains, manual extraction of meaningful patterns is no longer feasible, which makes the development of dedicated data mining tools all the more necessary. The main aim of data mining software is to build a resourceful predictive or descriptive model that handles large amounts of information efficiently and in a user-friendly way. Data mining mainly deals with very large collections of data, which impose severe computational constraints, and these challenges have led to the emergence of powerful data mining technologies. In this survey, a diverse collection of data mining tools is described, and the salient features and performance behavior of each tool are contrasted.
Abstract: Transportation problems are primarily concerned with the optimal way in which products produced at different plants (supply origins) are shipped to a number of warehouses or customers (demand destinations). The objective is to satisfy the destination requirements fully, within the plants' production capacity constraints, at the minimum possible cost. This study seeks ways of minimizing transportation cost in order to maximize profit. Data were sourced from the records of the Distribution Department of 7-Up Bottling Company Plc., Ilorin, Kwara State, Nigeria, and were computed and analyzed using three methods of solving the transportation problem. All three methods produced the same total transportation cost of N1,358,019, implying that any of them can be adopted by the company in transporting its final products to the wholesale dealers in order to minimize total transportation cost.
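The abstract does not name its three methods; the classical ones are the northwest-corner rule, the least-cost method, and Vogel's approximation. As a hedged sketch (the data below are invented, not the company's), the same balanced transportation problem can be posed as a linear program and cross-checked with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Invented balanced instance: 2 plants, 3 destinations.
cost = np.array([[8, 6, 10],
                 [9, 12, 13]], dtype=float)
supply = [20, 30]
demand = [10, 25, 15]

m, n = cost.shape
# Equality constraints: each row of shipments sums to its plant's supply,
# each column sums to its destination's demand.
A_eq = []
for i in range(m):
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
for j in range(n):
    col = np.zeros(m * n)
    col[j::n] = 1
    A_eq.append(col)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=supply + demand,
              bounds=[(0, None)] * (m * n), method="highs")
print(round(res.fun))  # minimum total shipping cost -> 465
```

For a balanced instance, a hand solution by any of the three classical methods followed by optimality testing (MODI/stepping-stone) should reach this same LP optimum.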
Abstract: In this study, rack systems, the structural storage units of warehouses, have been analyzed structurally with the Finite Element Method (FEM). Each cell of the rack system under consideration stores pallets weighing from 800 kg to 1000 kg, with dimensions of 0.80 x 1.15 x 1.50 m. Under this load, the total deformations and equivalent stresses of the structural elements, and the principal, tensile, and shear stresses of the connection elements, have been analyzed. The results of the analyses have been evaluated against the resistance limits of the structural and connection elements, and are presented both visually and as magnitudes.
Abstract: Automated storage and retrieval systems (AS/RS) have become frequently used systems in warehouses, as firms' warehouse systems transition from human-operated forklift applications to fast and safe AS/RS applications. In this study, the basic components and automation systems of an AS/RS are examined, the proposed system's automation components and their tasks in the system control algorithm are stated, and the control system structure is derived from this control algorithm.
Abstract: In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, in linear time, producing segments that are ordered relative to each other but whose elements are yet to be sorted; in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Beyond these theoretical results, the algorithm is of practical interest because of its simplicity, and experimental results show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.
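The abstract does not give the algorithm itself; the following is only an illustration of the two-phase pattern it describes — a linear-time in-place pass that yields relatively ordered segments, followed by an in-place per-segment sort (insertion sort here, chosen for simplicity, not for fidelity to the paper):

```python
def three_way_partition(a, pivot):
    """Phase 1 (illustrative): one in-place pass. Afterwards the segments
    [< pivot], [= pivot], [> pivot] are ordered relative to each other,
    though elements inside each segment are still unsorted."""
    lo, mid, hi = 0, 0, len(a) - 1
    while mid <= hi:
        if a[mid] < pivot:
            a[lo], a[mid] = a[mid], a[lo]
            lo += 1; mid += 1
        elif a[mid] > pivot:
            a[mid], a[hi] = a[hi], a[mid]
            hi -= 1
        else:
            mid += 1
    return lo, hi + 1  # boundaries of the middle (= pivot) segment

def insertion_sort(a, lo, hi):
    """Phase 2 (illustrative): sort a[lo:hi] in place with O(1) extra space."""
    for i in range(lo + 1, hi):
        x, j = a[i], i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def two_phase_sort(a):
    if len(a) < 2:
        return a
    left, right = three_way_partition(a, a[len(a) // 2])
    insertion_sort(a, 0, left)        # segment of elements < pivot
    insertion_sort(a, right, len(a))  # segment of elements > pivot
    return a
```

Only a constant number of index variables is used, so the whole procedure keeps the O(1) auxiliary-storage property the abstract emphasizes.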
Abstract: Since supply chains highly impact the financial
performance of companies, it is important to optimize and analyze
their Key Performance Indicators (KPI). The synergistic combination
of Particle Swarm Optimization (PSO) and Monte Carlo simulation is
applied to determine the optimal reorder point of warehouses in
supply chains. The goal of the optimization is the minimization of the
objective function calculated as the linear combination of holding and
order costs. The required values of service levels of the warehouses
represent non-linear constraints in the PSO. The results illustrate that
the developed stochastic simulator and optimization tool is flexible
enough to handle complex situations.
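A minimal sketch of the combination the abstract describes: a Monte Carlo simulation estimates the cost and service level of a candidate reorder point, and a small PSO searches for the point minimizing the linear combination of holding and order costs, with the service-level requirement handled as a penalty. The demand model, cost parameters, and policy details below are all invented for illustration.

```python
import random

def simulate(reorder_point, n_periods=200, lot=100, seed=1):
    """Monte Carlo estimate of average stock, order count, and service level.
    Fixed seed = common random numbers, so the objective is deterministic."""
    rng = random.Random(seed)
    inv, holding, orders, stockouts = reorder_point + lot, 0.0, 0, 0
    for _ in range(n_periods):
        demand = max(0.0, rng.gauss(20, 5))
        if inv < demand:
            stockouts += 1
            inv = 0.0
        else:
            inv -= demand
        if inv < reorder_point:   # instant replenishment, for simplicity
            inv += lot
            orders += 1
        holding += inv
    return holding / n_periods, orders, 1 - stockouts / n_periods

def cost(r, hold_cost=1.0, order_cost=50.0, min_service=0.95):
    avg_stock, orders, service = simulate(r)
    penalty = 1e4 * max(0.0, min_service - service)  # service-level constraint
    return hold_cost * avg_stock + order_cost * orders + penalty

def pso(n_particles=10, iters=30, lo=0.0, hi=100.0, seed=2):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pcost = list(xs), [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i], c
                if c < gcost:
                    gbest, gcost = xs[i], c
    return gbest, gcost

best_r, best_cost = pso()
print(f"reorder point {best_r:.1f}, cost {best_cost:.0f}")
```

Reusing the same random seed for every cost evaluation (common random numbers) is what makes the stochastic objective stable enough for PSO to optimize; without it, particle comparisons would be dominated by simulation noise.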
Abstract: The choice of data modeling technique for an information system is determined by the objective of the resultant data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, producing data models that ease analysis and querying, in contrast with entity-relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently driven the development of dimensional modeling, but this development has been significantly greater for commercial database management systems than for open-source ones, making it less affordable in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open-source modeling tools. It aims to take advantage of the fact that the regions most affected by HIV (sub-Saharan Africa) are also heavily resource constrained, yet hold large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open-source dimensional modeling tools. Using open source reduces the software costs of dimensional modeling and in turn makes data warehousing and data mining feasible even for those in resource-constrained settings who have data available.
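As an illustration of the kind of dimensional model the paper describes — the table and column names below are invented, not taken from the two source systems studied — a minimal HIV-care star schema can be prototyped with open-source tooling as simple as SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables describe the context of each measurement.
cur.execute("CREATE TABLE dim_patient (patient_key INTEGER PRIMARY KEY,"
            " sex TEXT, birth_year INTEGER)")
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY,"
            " year INTEGER, month INTEGER)")
# The fact table holds the numeric measures, keyed by the dimensions.
cur.execute("""CREATE TABLE fact_visit (
                 patient_key INTEGER REFERENCES dim_patient(patient_key),
                 date_key    INTEGER REFERENCES dim_date(date_key),
                 cd4_count   INTEGER)""")

cur.executemany("INSERT INTO dim_patient VALUES (?,?,?)",
                [(1, "F", 1985), (2, "M", 1990)])
cur.executemany("INSERT INTO dim_date VALUES (?,?,?)",
                [(20240101, 2024, 1), (20240201, 2024, 2)])
cur.executemany("INSERT INTO fact_visit VALUES (?,?,?)",
                [(1, 20240101, 410), (1, 20240201, 450), (2, 20240101, 380)])

# Typical analytical query the star shape is built for: average CD4 per month.
rows = cur.execute("""SELECT d.year, d.month, AVG(f.cd4_count)
                      FROM fact_visit f JOIN dim_date d USING (date_key)
                      GROUP BY d.year, d.month""").fetchall()
print(rows)  # [(2024, 1, 395.0), (2024, 2, 450.0)]
```

The point of the star shape is exactly this query pattern: facts aggregate cheaply while dimensions supply the grouping context.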
Abstract: Data warehouses (DWs) are repositories that contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed and integrated, and Loaded (ETL) into the DW using ETL tools. These tools focus on data movement, with models used only as a means to that end. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) to make the compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) to identify instances from different sources that represent the same real-world entity. This paper presents an overview of the proposed framework for modeling the ETL process, which is based on the use of a reference model and perspective schemata. This approach gives the designer a better understanding of the semantics associated with the ETL process.
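A sketch of the two ideas, with the caveat that the schema names, normalization rules, and matching key below are invented (the paper's actual assertion language is not shown in the abstract): correspondence assertions declared as data rather than buried in transformation code, plus a simple same-entity check across two sources.

```python
# Declarative correspondence assertions:
# DW attribute -> list of (source, source attribute, transform).
assertions = {
    "customer_name": [("crm", "full_name", str.strip),
                      ("billing", "cust_nm", str.strip)],
    "email":         [("crm", "email", str.lower),
                      ("billing", "mail_addr", str.lower)],
}

crm_row = {"full_name": "Ada Lovelace ", "email": "Ada@Example.com"}
billing_row = {"cust_nm": "Ada Lovelace", "mail_addr": "ada@example.com"}

def to_dw(source_name, row):
    """Apply every assertion that maps this source into the DW schema."""
    out = {}
    for dw_attr, mappings in assertions.items():
        for src, attr, transform in mappings:
            if src == source_name and attr in row:
                out[dw_attr] = transform(row[attr])
    return out

# Same-entity identification: records agreeing on the normalized email key
# are taken to denote one real-world entity.
a = to_dw("crm", crm_row)
b = to_dw("billing", billing_row)
print(a["email"] == b["email"])  # True: one entity, two source records
```

Keeping the assertions as data is what makes the compatibility between models declarative: the ETL engine interprets them, rather than each mapping being hand-coded.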
Abstract: The healthcare environment is generally perceived as information rich yet knowledge poor: effective analysis tools to discover hidden relationships and trends in the data are lacking, even though valuable knowledge can be discovered by applying data mining techniques to healthcare systems. In this study, a proficient methodology is presented for extracting significant patterns from coronary heart disease warehouses for heart attack prediction, a condition that unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of reduced features of high interest, using the rough sets technique combined with dynamic programming, and then to validate the classification using a Random Forest (RF) decision-tree ensemble to identify risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system was developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
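A sketch of the validation step only — the rough-sets feature reduction is not shown, and the data here are synthetic, not the 525-adult cohort — using scikit-learn's RandomForestClassifier:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a reduced clinical feature set (e.g. age, blood
# pressure, cholesterol, ...); binary labels mimic risky vs. non-risky cases.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"hold-out accuracy: {acc:.2f}")
```

In the paper's pipeline, the columns of `X` would be the feature subset selected by the rough-sets step, and the hold-out evaluation would be replaced by a comparison against the benchmark techniques mentioned above.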
Abstract: A data warehouse is a repository of information integrated from source data. Information in a data warehouse is stored in the form of materialized views in order to provide better performance for answering queries, and deciding which views to materialize is an important problem. To meet this requirement, constructing a search space close to the optimum is a necessary task, as it yields effective results when selecting views to materialize. In this paper we propose an approach to re-optimize a Multiple View Processing Plan (MVPP) using global common subexpressions: merged queries whose query processing cost is not close to optimal are rewritten. Experiments show that our approach helps to improve the total query processing cost of the MVPP, and that the sum of query processing cost and materialized view maintenance cost is also reduced after views are selected for materialization.
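A toy cost model of the trade-off the abstract optimizes (all costs and candidate views below are invented): for each candidate set of materialized views, the total cost is the sum of per-query processing costs, reduced wherever a materialized view (such as a shared common subexpression) can answer the query, plus the maintenance cost of the chosen views.

```python
from itertools import combinations

# Invented per-query base processing costs, and for each candidate view:
# (maintenance cost, {query: reduced cost when the view is materialized}).
base_cost = {"q1": 100, "q2": 80, "q3": 120}
views = {
    "v_sales":   (30, {"q1": 20, "q2": 25}),  # common subexpression of q1, q2
    "v_returns": (25, {"q3": 40}),
}

def total_cost(chosen):
    cost = sum(views[v][0] for v in chosen)  # view maintenance cost
    for q, base in base_cost.items():
        best = base
        for v in chosen:                     # cheapest way to answer q
            best = min(best, views[v][1].get(q, base))
        cost += best
    return cost

# Exhaustive search over candidate sets (fine for a toy space; the paper's
# contribution is precisely about building a good search space at scale).
best = min((frozenset(c) for r in range(len(views) + 1)
            for c in combinations(views, r)), key=total_cost)
print(sorted(best), total_cost(best))  # ['v_returns', 'v_sales'] 140
```

Materializing both views wins here (140 versus 300 with none) because the shared subexpression `v_sales` pays its maintenance cost back across two queries, which is the intuition behind rewriting merged queries around global common subexpressions.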
Abstract: The increasing interest in processing data created by sensor networks has evolved into approaches that implement sensor networks as databases. The aggregation operator, which calculates a value from a large group of data, such as an average or a sum, is an essential function that such sensor network databases need to provide. This work proposes adding a DURING clause to TinySQL to calculate values over a specific long period, and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and the aggregate value is computed at the root node. As such root nodes need to be memory efficient and low powered, recomputing aggregate values from all past and current data is problematic; applying incremental view maintenance techniques can therefore reduce memory consumption and support fast computation of aggregate values.
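A minimal sketch of incremental maintenance for an average (the class and its merging of child-node summaries are our illustration, not TinySQL): rather than storing every reading, each node keeps only a (sum, count) summary that is updated per reading and merged up the tree.

```python
class AvgView:
    """Incrementally maintained average: O(1) state instead of all readings."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def insert(self, reading):
        # Incremental maintenance: update the summary, discard the reading.
        self.total += reading
        self.count += 1

    def merge(self, child):
        # Parent nodes combine child summaries instead of raw data streams.
        self.total += child.total
        self.count += child.count

    def value(self):
        return self.total / self.count if self.count else None

# Two leaf nodes stream readings; the root merges their summaries.
left, right, root = AvgView(), AvgView(), AvgView()
for x in (10, 20, 30):
    left.insert(x)
for x in (40, 50):
    right.insert(x)
root.merge(left)
root.merge(right)
print(root.value())  # 30.0
```

Because (sum, count) pairs merge associatively, the root's memory use is constant regardless of how long the DURING window runs, which is exactly the property the abstract needs from low-powered root nodes.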