Industry Openness, Human Capital and Wage Inequality: Evidence from Chinese Manufacturing Firms

This paper uses primary data from 670 Chinese manufacturing firms, together with the newly introduced regression-based inequality decomposition method, to study the effect of openness on wage inequality. We find that openness leads to a positive industry wage premium, but its contribution to firm-level wage inequality is relatively small, at only 4.69%. The major contributor to wage inequality is human capital, which explains 14.3% of the wage inequality across the sample firms.
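To illustrate the regression-based approach, a Fields-type decomposition attributes to each covariate the share of log-wage variance it explains: the share for factor k is its coefficient times its covariance with log wages, divided by the wage variance. The sketch below runs on synthetic data; the variable names, coefficients and data are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 670  # matches the abstract's sample size; the data here are synthetic

# Hypothetical firm-level covariates
human_capital = rng.normal(12, 3, n)   # e.g., average years of schooling
openness = rng.binomial(1, 0.4, n)     # e.g., open-industry dummy
log_wage = 0.08 * human_capital + 0.15 * openness + rng.normal(0, 1.5, n)

# OLS via least squares
X = np.column_stack([np.ones(n), human_capital, openness])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

def factor_share(xk, bk, y):
    # Fields-style share of var(y) attributed to factor k
    return bk * np.cov(xk, y)[0, 1] / np.var(y, ddof=1)

share_hc = factor_share(human_capital, beta[1], log_wage)
share_open = factor_share(openness, beta[2], log_wage)
print(f"human capital share: {share_hc:.3f}, openness share: {share_open:.3f}")
```

The shares of all factors plus the residual sum to one, which is what lets the paper report percentage contributions such as 14.3% for human capital.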

Antioxidant Capacity and Total Phenolic Content of Aqueous Acetone and Ethanol Extract of Edible Parts of Moringa oleifera and Sesbania grandiflora

Aqueous ethanol and aqueous acetone extracts of Moringa oleifera (outer pericarp of immature fruit, and flower) and of the white variety of Sesbania grandiflora (flower and leaf) were examined for radical scavenging capacity and antioxidant activity. The ethanol extracts of S. grandiflora (flower and leaf) and the acetone extracts of M. oleifera (outer pericarp of immature fruit, and flower) contained higher levels of total dietary phenolics than the other extracts. The antioxidant potential of the extracts was assessed with several in vitro assays: reducing power, DPPH˙, ABTS˙+ and ˙OH radical scavenging capacity, an antihemolytic assay based on hydrogen peroxide-induced hemolysis, and metal chelating ability. Although all extracts exhibited dose-dependent reducing power, the acetone extracts of all samples showed greater hydrogen-donating ability than the ethanol extracts in the DPPH˙ (2.3% - 65.03%) and hydroxyl radical scavenging (21.6% - 77.4%) systems. Multiple antioxidant activity was evident in the antihemolytic activity (43.2% - 68.0%) and metal ion chelating potency (45.16 - 104.26 mg EDTA/g sample). The results indicate that the acetone extracts of M. oleifera (outer pericarp of immature fruit, and flower) and S. grandiflora (flower and leaf), being rich in polyphenols, could be utilized as natural antioxidants or nutraceuticals.
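The scavenging percentages quoted above are conventionally computed from assay absorbances with the standard inhibition formula. A minimal sketch (the absorbance readings below are hypothetical, not the study's measurements):

```python
def scavenging_pct(a_control, a_sample):
    # Standard radical-scavenging formula used in DPPH/ABTS/hydroxyl assays:
    # % inhibition = (A_control - A_sample) / A_control * 100
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings at increasing extract concentration
a_control = 0.820
for a_sample in (0.760, 0.540, 0.310):
    print(f"{scavenging_pct(a_control, a_sample):.1f}%")
```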

Design of Permanent Sensor Fault Tolerance Algorithms by Sliding Mode Observer for Smart Hybrid Powerpack

In the Smart Hybrid Powerpack (SHP), an LVDT sensor detects changes in the length of the EHA output, and the thrust of the EHA is controlled by a pressure sensor. Either sensor can develop a hardware fault due to an internal problem or an external disturbance, and the EHA of the SHP can then become uncontrollable because the feedback control relies on uncertain information. In this paper, a sliding mode observer algorithm estimates the true sensor output under a permanent sensor fault. The proposed algorithm recovers from disconnection and short-circuit faults and also detects various other sensor fault modes.
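A minimal simulation can illustrate the idea of reconfiguring feedback onto an observer estimate when the sensor fails. The sketch below uses a generic first-order plant with assumed gains, not the SHP's actual EHA model; the fault detector is likewise assumed.

```python
import math

# Hypothetical first-order plant x_dot = -a*x + b*u, measured by a sensor
# that disconnects (sticks at 0) halfway through the run.
a, b = 2.0, 1.0
L = 5.0             # sliding-mode observer switching gain (assumed)
dt, T = 0.001, 2.0

x, x_hat = 0.0, 0.3  # true state vs. deliberately wrong initial estimate
for step in range(int(T / dt)):
    t = step * dt
    u = 1.0                           # constant command
    fault = t >= 1.0                  # assume a detector flags the fault
    y = 0.0 if fault else x           # sensor disconnection at t = 1 s
    if not fault:
        # observer driven by the healthy measurement via a switching term
        x_hat += dt * (-a * x_hat + b * u + L * math.copysign(1.0, y - x_hat))
    else:
        # reconfiguration: open-loop model prediction replaces the sensor
        x_hat += dt * (-a * x_hat + b * u)
    x += dt * (-a * x + b * u)

print(f"true x = {x:.4f}, estimate = {x_hat:.4f}")
```

Before the fault the switching term drives the estimate onto the true state; after the fault the estimate keeps tracking because the model dynamics match, which is the property that lets the controller close its loop on the estimate instead of the failed sensor.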

Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), referred to in this paper as the Electrically Heated Tobacco System (EHTS), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (the Electrically Heated Tobacco Product (EHTP)) when it is inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into the vapor phase, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS as well as for relevant gas mixture compositions measured in the EHTP aerosol, which consists mostly of water, glycerol and nicotine, with other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use, when operated in the Holder, has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state on their own and therefore could not generate aerosol droplets from the multicomponent gas mixture at the operating conditions simulated. For the analytically characterized aerosol composition and the estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. 
This implies that, according to the CNT, an aerosol former such as glycerol needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess whether these conclusions are sensitive to the initial amount of the minor compounds, and to account for the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with the initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore did not actively contribute to the nucleation process. According to the CNT, then, an aerosol cannot be generated without the help of an aerosol former from multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP, even if all minor compounds are released or generated in a single puff.
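The CNT quantities invoked above take, in the single-component case, the standard textbook form (the multicomponent extension used in the paper generalizes these expressions):

$$ S = \frac{p}{p_{\mathrm{sat}}(T)}, \qquad \Delta G^{*} = \frac{16\pi\,\sigma^{3} v_{m}^{2}}{3\,\bigl(k_{B} T \ln S\bigr)^{2}}, \qquad J = J_{0}\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B} T}\right), $$

where $S$ is the saturation ratio, $\sigma$ the surface tension, $v_{m}$ the molecular volume, and $J$ the nucleation rate. For $S \le 1$ the barrier $\Delta G^{*}$ is undefined or infinite and no droplets nucleate, which is why minor compounds that never reach supersaturation cannot form an aerosol without an aerosol former such as glycerol.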

A Reference Framework Integrating Lean and Green Principles within Supply Chain Management

In recent decades, a growing number of companies have adopted the lean philosophy to improve productivity and efficiency, promoting the so-called continuous improvement concept by reducing wasted time and cutting out non-value-added activities. In parallel, attention to green practice and management has risen with the spread of the green supply chain pattern, which aims to minimise landfilled waste, drained wastewater and pollutant emissions. Starting from a review of contributions examining lean and green principles applied to supply chain management, the most relevant drivers for measuring the performance of industrial processes are pointed out. Specific attention is paid to the role of cost, because it is of key importance and crosses both lean and green principles. This analysis leads to an original reference framework for integrating lean and green principles in designing and managing supply chains. The proposed framework supports applying the integrated lean-green perspective to the whole value chain or to parts of it, e.g. the distribution network, assembly system, job shop or storage system. Evidence shows that combining lean and green practices leads to results greater than the sum of the performances from their separate application: lean thinking has beneficial effects on green practices and, at the same time, methods enabling environmental savings generate positive effects on time reduction and process quality.

Simplified Analysis on Steel Frame Infill with FRP Composite Panel

In order to understand the seismic behavior of a steel frame structure with an infill FRP composite panel, simple models for simulating the steel frame with the panel systems were developed in this study. To arrive at a simple design method for the steel framed structure with the damping panel system, 2-D finite element analysis with spring and dashpot models was conducted in ABAQUS. Under various applied spring stiffnesses and dashpot coefficients, the expected hysteretic energy responses of the steel frame with damping panel systems were investigated. Using the proposed simple design method, which determines the stiffness and the damping, it is possible to select the FRP and damping materials for a steel frame system.
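The hysteretic energy dissipated by one spring-dashpot element under cyclic drift can be checked against its closed form, since under sinusoidal motion only the dashpot dissipates energy. The parameters below are illustrative, not the study's calibrated values:

```python
import math

# Kelvin-Voigt spring-dashpot element under sinusoidal drift x(t) = X*sin(w*t)
k = 2.0e6             # spring stiffness (N/m), assumed
c = 5.0e4             # dashpot coefficient (N*s/m), assumed
X = 0.01              # displacement amplitude (m)
w = 2 * math.pi * 1.0 # 1 Hz excitation (rad/s)

# Numerical integral of F dx = F*v dt over one full cycle
n = 20000
dt = (2 * math.pi / w) / n
E = 0.0
for i in range(n):
    t = i * dt
    v = X * w * math.cos(w * t)
    F = k * X * math.sin(w * t) + c * v
    E += F * v * dt

# Closed form: the spring term integrates to zero over a cycle,
# leaving the dashpot contribution E = pi * c * w * X^2
E_closed = math.pi * c * w * X**2
print(f"numerical = {E:.2f} J, closed-form = {E_closed:.2f} J")
```

Sweeping k and c in such an energy balance is one way to see how the expected hysteretic response depends on the chosen stiffness and damping.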

Status Report of the GERDA Phase II Startup

The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the neutrinoless double beta (0νββ) decay of 76Ge. Germanium diodes enriched to ∼86% in the double beta emitter 76Ge (enrGe) are exposed, serving as both source and detectors of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe of still-open issues in the neutrino sector of the Standard Model of particle physics and beyond. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform the next step in its 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, by exposing a detector mass of about 35 kg of enrGe with a background index of about 10^−3 cts/(keV·kg·yr). One of the main new implementations is the read-out of liquid argon scintillation light, to veto events that deposit their energy only partially in the Ge and partially in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected features from the 2015 commissioning and 2016 calibration runs are presented. The main Phase I achievements are also reviewed.

Statistically Significant Differences of Carbon Dioxide and Carbon Monoxide Emission in Photocopying Process

Experimental results confirmed the temporal variation of carbon dioxide and carbon monoxide concentrations during the working shift of the photocopying process in a small photocopying shop in Novi Sad, Serbia. Statistically significant differences in the target gases were examined with two-way analysis of variance without replication, followed by Scheffe's post hoc test. Statistically significant differences were found for carbon monoxide emission, with F-values (12.37 and 31.88) greater than Fcrit (6.94), in contrast to carbon dioxide emission (F-values of 1.23 and 3.12 were less than Fcrit). Scheffe's post hoc test indicated that sampling point A (near the photocopier) and the second time interval contribute the most to carbon monoxide emission.
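For reference, two-way ANOVA without replication reduces to simple sums of squares over a sampling-point x time-interval table, with the interaction term serving as the error estimate. A sketch on hypothetical CO readings (not the measured Novi Sad data):

```python
import numpy as np

# Hypothetical CO concentrations (ppm): rows = sampling points, cols = time intervals
co = np.array([
    [4.1, 6.8, 5.9, 5.2],   # point A (near the photocopier)
    [3.2, 4.9, 4.1, 3.8],   # point B
    [2.9, 4.4, 3.9, 3.5],   # point C
])
r, c = co.shape
grand = co.mean()

ss_rows = c * ((co.mean(axis=1) - grand) ** 2).sum()
ss_cols = r * ((co.mean(axis=0) - grand) ** 2).sum()
ss_tot = ((co - grand) ** 2).sum()
ss_err = ss_tot - ss_rows - ss_cols      # residual (interaction) sum of squares

df_rows, df_cols = r - 1, c - 1
df_err = df_rows * df_cols

f_rows = (ss_rows / df_rows) / (ss_err / df_err)
f_cols = (ss_cols / df_cols) / (ss_err / df_err)
print(f"F(sampling points) = {f_rows:.2f}, F(time intervals) = {f_cols:.2f}")
```

Each F-value is then compared against Fcrit at the corresponding degrees of freedom, exactly as in the CO/CO2 comparison reported above.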

The Integration Process of Non-EU Citizens in Luxembourg: From an Empirical Approach Toward a Theoretical Model

Integration of foreign communities has been a forefront issue in Luxembourg for some time now. The country’s continued progress depends largely on the successful integration of immigrants. The aim of our study was to analyze the factors that intervene in the course of integration of non-EU citizens, through the discourse of non-EU citizens residing in Luxembourg who have signed the Welcome and Integration Contract (CAI). The two-year contract offers integration services to assist foreigners in getting settled in the country. Semi-structured focus group discussions with 50 volunteers were held in English, French, Spanish, Serbo-Croatian or Chinese, and participants were asked to talk about their integration experiences. The discussions were recorded and transcribed, and the transcriptions were analyzed with the help of NVivo 10, a qualitative analysis software package. A systematic and reiterative analysis of decomposing and reconstituting was realized through (1) the identification of predetermined categories (difficulties, challenges and integration needs), (2) initial coding, the grouping together of similar ideas, and (3) axial coding, the regrouping of items from the initial coding in new ways in order to create sub-categories and identify other core dimensions. Our results show that the intervening factors include language acquisition, professional career, and socio-cultural activities or events. Each of these factors comprises different components whose weight shifts from person to person and from situation to situation. Connecting these three emergent factors are two elements essential to the success of the immigrant’s integration: time, and deliberate effort from the immigrants, the community, and the formal institutions charged with helping immigrants integrate. We propose a theoretical model in which the factors described may be classified in terms of how they predispose, facilitate and/or reinforce the process towards successful integration. 
Measures currently in place propose one-size-fits-all programs, yet integrative measures that target the family unit, together with measures customized to target groups based on their needs, would work best.

A Hybrid P2P Storage Scheme Based on Erasure Coding and Replication

A peer-to-peer storage system faces challenges such as peer availability, data protection and churn. To address these challenges, different redundancy, replacement and repair schemes are used. This paper presents a hybrid redundancy scheme that combines replication and erasure coding. We calculate and compare the storage, access and maintenance costs of the proposed scheme with those of existing redundancy schemes. To capture realistic peer behaviour, a trace of a live peer-to-peer system is used. The effects of different replication and repair schemes are also shown. The proposed hybrid scheme performs better than the existing double-coding hybrid scheme on all metrics and has a lower maintenance cost than hierarchical codes.
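The cost trade-off between the basic schemes can be illustrated with back-of-the-envelope formulas. The object size, code parameters, and the simplified hybrid below are assumptions for illustration, not the paper's exact construction:

```python
# Storage overhead and repair traffic for common redundancy schemes.
OBJECT_MB = 64.0

def replication(k):
    # k full copies: overhead k; repairing one lost copy fetches the whole object
    return {"stored_mb": k * OBJECT_MB, "repair_mb": OBJECT_MB}

def erasure(n, k):
    # (n, k) MDS code: n fragments of size 1/k, any k rebuild the object;
    # a naive repair downloads k fragments to regenerate one
    frag = OBJECT_MB / k
    return {"stored_mb": n * frag, "repair_mb": k * frag}

def hybrid(n, k):
    # one full replica plus an (n, k) code: fast reads from the replica,
    # and a lost fragment is re-encoded from the replica at fragment cost
    e = erasure(n, k)
    return {"stored_mb": OBJECT_MB + e["stored_mb"], "repair_mb": OBJECT_MB / k}

for name, cost in [("3x replication", replication(3)),
                   ("(9,6) erasure", erasure(9, 6)),
                   ("hybrid (9,6)+replica", hybrid(9, 6))]:
    print(f"{name:22s} stored={cost['stored_mb']:6.1f} MB  repair={cost['repair_mb']:5.1f} MB")
```

The sketch shows the motivation for hybrids: pure replication pays heavily in storage, pure erasure coding pays in repair traffic, and a replica-plus-code design trades some storage for cheap repairs.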

Solution of Logistics Center Selection Problem Using the Axiomatic Design Method

Logistics centers are areas in which all national and international logistics, and activities related to logistics, can be carried out by various businesses. Logistics centers play a key role in joining transport streams and transport system operations. It is therefore important where these centers are positioned if they are to be effective and efficient and to deliver the expected performance. In this study, the location selection problem of positioning a logistics center is discussed. Alternative centers are evaluated according to certain criteria, and the most appropriate center is identified using the axiomatic design method.
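In axiomatic design, the information axiom favors the alternative with the lowest total information content, I = sum over k of log2(1/p_k), where p_k is the probability that the alternative satisfies functional requirement k. A sketch with hypothetical alternatives, criteria and probabilities (not the study's data):

```python
import math

# Hypothetical logistics-center alternatives with the probability that each
# satisfies a given functional requirement; criteria names are illustrative.
alternatives = {
    "Center A": {"transport access": 0.90, "land cost": 0.60, "expandability": 0.80},
    "Center B": {"transport access": 0.75, "land cost": 0.95, "expandability": 0.70},
    "Center C": {"transport access": 0.85, "land cost": 0.80, "expandability": 0.85},
}

def information_content(probs):
    # Information axiom: I = sum_k log2(1 / p_k); lower total I is preferred
    return sum(math.log2(1.0 / p) for p in probs.values())

scores = {name: information_content(frs) for name, frs in alternatives.items()}
best = min(scores, key=scores.get)
for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: I = {s:.3f} bits")
print("selected:", best)
```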

Designing Creative Events with Deconstructivism Approach

Deconstruction is an approach that stands entirely apart from traditional, prevalent architecture. It attempts to place architecture in sharp contrast with its opposite, emerges by attending to the neglected and missing aspects of architecture, and deconstructs its stable structures. It also proceeds boldly beyond existing frameworks and intends to create a different, more efficient prospect for space. Deconstructivist architecture aims to satisfy both prospective and retrospective visions and to take into account all tastes of the present in order to transcend time. Likewise, it ventures to fragment the facts and symbols of the past and to extract from them new concepts that coincide with today’s circumstances. Since this approach attempts to surpass the limits of prevalent architecture, it can be employed to design places in which creative events occur and imagination and ambition flourish. Thought-provoking artistic events can grow and mature in such places and be presented in the best way possible to all people. The concept of event proposed in the plan grows out of the interaction between space and creation. In addition to triggering surprise and strong impressions, it is also conceived as a bold journey into the suspended realms of the traditional conflicts in architecture, such as architecture-landscape, interior-exterior, center-margin, product-process, and stability-instability. In this project, recognition and organization first take place through an interpretive-historical research method, an examination of the inputs, and data collection. The obtained data are then evaluated using deductive reasoning and finally interpreted. 
Given that the research topic is in its infancy, that there is no similar case in Iran, and that only a limited number of corresponding instances exist across the world, the selected topic helps to shed light on unrevealed and neglected aspects of architecture. Similarly, criticizing, investigating and comparing specific, highly prized cases in other countries with the project under study can serve as an introduction to this architectural style.

Triangular Geometric Feature for Offline Signature Verification

Handwritten signatures are widely accepted as a biometric characteristic for personal authentication. The use of appropriate features plays an important role in determining the accuracy of signature verification; therefore, this paper presents a feature based on a geometrical concept. Triangle attributes are exploited to design a new feature, since a triangle possesses orientation, angles and transformations that can improve accuracy. The proposed feature uses a triangulation geometric set comprising the sides, angles and perimeter of a triangle derived from the center of gravity of a signature image. For classification, a Euclidean classifier along with a voting-based classifier is used to detect forged signatures. This classification process was experimented with using the triangular geometric feature and selected global features. In an experiment validated on the Grupo de Senales 960 (GPDS-960) signature database, the proposed triangular geometric feature achieved a lower Average Error Rate (AER) of 34%, compared to 43% for the selected global features. In conclusion, the proposed triangular geometric feature proves to be the more reliable feature for accurate signature verification.
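The triangle attributes named above (sides, angles, perimeter) follow from elementary geometry once three points are fixed. The sketch below forms a triangle from the centroid of some signature pixels and two keypoints; the pixel coordinates and keypoint choice are hypothetical, as the paper's exact keypoint selection is not reproduced here:

```python
import math

def triangle_features(p1, p2, p3):
    """Side lengths, interior angles (degrees) and perimeter of the triangle
    p1-p2-p3, e.g. one formed from a signature's centre of gravity."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # side opposite each vertex
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    def angle(opp, s1, s2):
        # law of cosines: opp^2 = s1^2 + s2^2 - 2*s1*s2*cos(theta)
        return math.degrees(math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    angles = (angle(a, b, c), angle(b, a, c), angle(c, a, b))
    return {"sides": (a, b, c), "angles": angles, "perimeter": a + b + c}

# Toy example: centroid of binarised signature pixels plus two extreme points
pixels = [(10, 12), (14, 18), (22, 15), (30, 20), (18, 25)]
cx = sum(x for x, _ in pixels) / len(pixels)
cy = sum(y for _, y in pixels) / len(pixels)
feats = triangle_features((cx, cy), (10, 12), (30, 20))
print(feats)
```

A feature vector of such sides, angles and perimeters can then be compared between a questioned signature and a reference with a distance-based classifier, as the paper does with the Euclidean classifier.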

Origins of Strict Liability for Abnormally Dangerous Activities in the United States, Rylands v. Fletcher and a General Clause of Strict Liability in the UK

The paper traces the birth and evolution of the British precedent Rylands v. Fletcher which, once adopted on the other side of the ocean (in the United States), gave rise to a general clause of liability for abnormally dangerous activities, recognized in §20 of the American Restatement of the Law Third, Liability for Physical and Emotional Harm. The main goal of the paper is to analyze the development of the legal doctrine and of the case law subsequent to the precedent, together with the intent of the British judicature to leap from the traditional rule contained in Rylands v. Fletcher to a general clause similar to that introduced in the United States and, recently, also at the European level. As is well known, within the scope of tort law two different initiatives compete in the aim of harmonizing European laws: the European Group on Tort Law, with its Principles of European Tort Law (hereinafter PETL), whose article 5:101 sets forth a general clause of strict liability for abnormally dangerous activities, and the Study Group on a European Civil Code, with its Common Frame of Reference (CFR), which promotes instead an ad hoc model listing determined cases of strict liability. The very narrow scope of application of art. 5:101 PETL, restricted only to abnormally dangerous activities, stands in opposition to the very broad spectrum of strict liability cases governed by the CFR. The former is a perfect example of a general clause that offers a minimum, basic standard, possibly acceptable also in those countries in which, as in the United Kingdom, this regime of liability is completely marginalized.

High-Rises and Urban Design: The Reasons for Unsuccessful Placemaking with Residential High-Rises in England

High-rises and placemaking is an understudied combination that is receiving more and more interest with the proliferation of this typology in many British cities. The reason for studying three major cities in England (London, Birmingham and Manchester) is to learn from the latest advances in urban design in well-developed and prominent urban environments. The analysis of several high-rise sites reveals weaknesses in the urban design of contemporary British cities and presents an opportunity to learn from implemented examples. The purpose of this research is therefore to analyze design approaches to creating a sustainable and varied urban environment where high-rises are involved. The research questions raised by the study are: what is the quality of the high-rises and their surroundings; what facilities and features are deployed in the research area; what is the role of the high-rise buildings in the placemaking process; and what urban design principles are applicable in this context. The methodology relies on observation of the researched areas, guided by structured questions developed by the author to evaluate the outdoor qualities of the high-rise surroundings. In this context, the paper argues that the quality of the public realm around the high-rises is quite low, missing basic but vital elements such as plazas, public art and seating, along with landscaping and pocket parks. There is a lack of coherence, the rhythm of the streets is often disrupted, and even though the high-rises are very aesthetically appealing, they fail to create a sense of place on their own. The implication of the study is that future planning can take the critique in this article into consideration and provide more opportunities for urban design interventions around high-rise buildings in British cities.

Association between Single Nucleotide Polymorphism of Calpain1 Gene and Meat Tenderness Traits in Different Genotypes of Chicken: Malaysian Native and Commercial Broiler Line

Meat tenderness is one of the most important factors affecting consumers' assessment of meat quality. Variation in meat tenderness is genetically controlled and varies among breeds; it is also influenced by environmental factors acting during rigor mortis and postmortem. Final postmortem tenderization relies on the extent of proteolysis of myofibrillar proteins caused by the endogenous activity of the proteolytic calpain system, which includes several calcium-dependent cysteine proteases and an inhibitor, calpastatin. It is widely accepted that in farm animals, including chickens, the μ-calpain gene (CAPN1) is a physiological candidate gene for meat tenderness. This study aimed to identify associations of single nucleotide polymorphism (SNP) markers in the CAPN1 gene with the tenderness of chicken breast meat from two breed crosses: Malaysian native and commercial broiler. Ten five-month-old native chickens and ten 42-day-old commercial broilers were collected from the local market; breast muscles were removed two hours after slaughter, packed separately in plastic bags and kept at -20°C for 24 h. The tenderness phenotype of all breast meat samples was determined by Warner-Bratzler Shear Force (WBSF). Thawing and cooking losses were also measured in the same breast samples before WBSF determination. Polymerase chain reaction (PCR) was used to identify the previously reported C7198A and G9950A SNPs in the CAPN1 gene and to assess their associations with meat tenderness in the two breeds. The broiler breast meat showed lower shear force values and lower thawing loss rates than the native chickens (p

Threshold Based Region Incrementing Secret Sharing Scheme for Color Images

In this era of online communication, which transacts data in 0s and 1s, confidentiality is a prized commodity. Ensuring the safe transmission of encrypted data and its uncorrupted recovery is a matter of prime concern. Among the several techniques for the secure sharing of images, this paper proposes a k-out-of-n region incrementing image sharing scheme for color images. The highlight of this scheme is the use of simple Boolean and arithmetic operations for generating shares, and of the Lagrange interpolation polynomial for authenticating shares. Additionally, the scheme addresses problems faced by existing algorithms, such as color reversal and pixel expansion, and it regenerates the original secret image, whereas existing systems regenerate only a half-toned secret image.
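The paper's exact construction is not reproduced here, but the role of Lagrange interpolation in a k-out-of-n scheme can be illustrated with a minimal Shamir-style sketch over a prime field; the field, threshold and secret value below are illustrative assumptions:

```python
import random

P = 257  # prime just above the byte range, so one pixel byte fits in one field element

def make_shares(secret, k, n, rng=random.Random(42)):
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for coef in reversed(coeffs):     # Horner evaluation mod P
            acc = (acc * x + coef) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, k=3, n=5)   # any 3 of the 5 shares suffice
print(recover(shares[:3]), recover(shares[1:4]))
```

Any k shares reconstruct the polynomial exactly, while k-1 shares reveal nothing about the constant term; applied per byte, the same interpolation check can authenticate that submitted shares are consistent.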

Effective Communication with the Czech Customers 50+ in the Financial Market

The paper deals with finding and describing effective forms of marketing communication for the 50+ segment in the financial market in the Czech Republic. The 50+ segment represents great marketing potential for the future, but the Czech financial institutions have not yet reacted sufficiently to this fact and have not prepared appropriate marketing programs for this customer segment. Demographic aging is a fundamental characteristic of current European population evolution, and the prospect of further population aging is even more noticeable in the Czech Republic. This paper is based on data from one part of a primary marketing research study. It sets out the basic problem areas, including a definition of marketing communication in the financial market, the primary research problem, the hypothesis and the primary research methodology. Finally, a suitable marketing communication approach for the selected sub-segment aged 50-60 is proposed according to the research findings.

Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

The rapid generation of a high volume and broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purpose of developing their business. The now largely decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic

In recent years, research on knowledge sources, decision support systems, data mining and the process of knowledge discovery in databases has grown in importance, and these aspects are considered to affect one another. In this article, we merge an information source and a knowledge source to propose a knowledge-based system, within the limits of knowledge management, based on storing and retrieving knowledge to manage information and improve decision making and resources. We use data mining with the Apriori algorithm in the knowledge discovery process. One problem with the Apriori algorithm is that the user must specify a minimum support threshold for the regularities. Imagine a user who wants to apply the Apriori algorithm to a database with millions of transactions: the user cannot have knowledge of all the transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to cluster the data before applying the Apriori algorithm to the database, and we also automatically suggest the most suitable threshold to the user.
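For reference, a minimal level-wise Apriori sketch shows exactly where the user-supplied minimum support threshold enters; the fuzzy pre-clustering and automatic threshold suggestion proposed in the article are not included, and the basket data are illustrative:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise Apriori; min_support is the fraction
    of transactions an itemset must appear in (the threshold the article
    proposes to suggest automatically rather than ask the user for)."""
    n = len(transactions)
    tx = [frozenset(t) for t in transactions]
    # L1: frequent single items
    counts = {}
    for t in tx:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation: k-subsets of items appearing in frequent sets
        items = sorted({i for s in frequent for i in s})
        candidates = {frozenset(c) for c in combinations(items, k)}
        # Apriori pruning: drop candidates with an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {}
        for c in candidates:
            sup = sum(1 for t in tx if c <= t) / n
            if sup >= min_support:
                frequent[c] = sup
        result.update(frequent)
        k += 1
    return result

baskets = [["milk", "bread"], ["milk", "bread", "butter"],
           ["bread", "butter"], ["milk", "butter"], ["milk", "bread", "butter"]]
freq = apriori(baskets, min_support=0.6)
print(sorted((tuple(sorted(s)), sup) for s, sup in freq.items()))
```

With min_support = 0.6 the three single items and three pairs survive while the full triple does not; a threshold set too high or too low changes this output drastically, which is the sensitivity the article's automatic suggestion aims to address.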