Search results

Number of results: 209

Abstract

To identify the modal parameters of civil structures, it is vital to distinguish defective data from appropriate, accurate data. Defects in data may arise for various reasons, such as errors in data collection or malfunctioning sensors. For this purpose, Exploratory Data Analysis (EDA) was employed to visualize the distribution of the sensor data and to detect malfunctioning sensors. Outlier analysis was then performed to remove data points that could disrupt accurate data analysis, and Data-Driven Stochastic Subspace Identification (DATA-SSI) was employed to identify the modal parameters. Finally, stabilization diagrams were plotted to validate the accuracy of the proposed method. The Sutong Bridge, one of the longest-span cable-stayed bridges, was used as a case study for the suggested technique. The results obtained after employing the above-mentioned techniques are valuable, accurate and effective.
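
As a rough illustration of the outlier-analysis step (a generic sketch, not the authors' exact procedure), the snippet below drops samples outside the interquartile-range fences of a sensor channel before any identification is run; the channel name and the injected glitches are assumptions for the example.

```python
import numpy as np
import pandas as pd

def remove_iqr_outliers(series: pd.Series, k: float = 1.5) -> pd.Series:
    """Drop samples outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return series[(series >= q1 - k * iqr) & (series <= q3 + k * iqr)]

# Hypothetical sensor record: a channel with a few spurious spikes.
rng = np.random.default_rng(0)
acceleration = pd.Series(rng.normal(0.0, 1.0, 10_000))
acceleration.iloc[::997] = 50.0          # injected sensor glitches
clean = remove_iqr_outliers(acceleration)
print(f"kept {len(clean)} of {len(acceleration)} samples")
```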


Authors and Affiliations

I. Khan
D. Shan
Q. Li

Abstract

A common observation of everyday life reveals the growing importance of data science methods, which form an increasingly important part of the mainstream knowledge-generation process. Digital technologies, with their potential for data collection and processing, have given rise to the fourth paradigm of science, based on Big Data. Key to these transformations are datafication and data mining, which allow knowledge to be discovered from contaminated data. The main purpose of the considerations presented here is to describe the phenomena that make up these processes and to indicate their possible epistemological consequences. It is assumed that increasing datafication tendencies may result in a data-centric perception of all aspects of reality, making data and the methods of their processing a kind of higher instance shaping human thinking about the world. This research is theoretical in nature. Issues such as the process of datafication and data science are analyzed, with a focus on the areas that raise doubts about the validity of this form of cognition.


Authors and Affiliations

Grażyna Osika

Abstract

Decision-making processes, including those related to ill-structured problems, are of considerable significance in construction projects. Computer-aided inference under such conditions requires specific, non-algorithmic methods and tools; the best recognized and most successfully used in practice are expert systems. The knowledge such systems need for inference is most frequently acquired directly from experts (through a dialogue between a domain expert and a knowledge engineer) and from various source documents. Little is known, however, about the possibility of automating knowledge acquisition in this area, and as a result it is scarcely used in practice. It has to be noted that in numerous areas of management, more and more attention is paid to acquiring knowledge from available data, and various methods and tools for this are known and successfully employed in decision support. The paper attempts to select methods for knowledge discovery in data and presents possible ways of representing the acquired knowledge, as well as sample tools (including programming ones) that allow this knowledge to be used in the area under consideration (see the sketch below).
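
The paper does not name its methods here, but one commonly used way to discover knowledge from data and represent it in a form an expert system can consume is to induce a decision tree and export it as readable if-then rules. A minimal sketch; the project features and risk labels are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical project records: [delay_days, budget_overrun_pct, design_changes]
X = [[2, 1, 0], [30, 15, 4], [5, 2, 1], [45, 20, 6], [1, 0, 0], [25, 12, 3]]
y = ["low_risk", "high_risk", "low_risk", "high_risk", "low_risk", "high_risk"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Export the induced knowledge as human-readable decision rules.
print(export_text(tree, feature_names=["delay_days",
                                       "budget_overrun_pct",
                                       "design_changes"]))
```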


Authors and Affiliations

J. Szelka
Z. Wrona

Abstract

The paper presents the phenomenon of big data, with special attention to its relation to research work in the experimental sciences. I seek answers to two questions. First, can the research methods proposed within the big data paradigm be applied in the experimental sciences? Second, does applying research methods subject to the big data paradigm lead, in consequence, to a new understanding of science?


Authors and Affiliations

Sławomir Leciejewski

Abstract

The purpose of the paper is to analyze the positioning of Ukraine in global indices of innovative development and competitiveness, to evaluate indicators of innovation activity and, based on the outcomes of the research, to determine Ukraine's place in the global innovation space. The dynamics of innovation activity on an international scale are presented, based on the consolidated indicators of the Global Innovation Index, and Ukraine's position in it and its progress in achieving its goals are determined, to better understand the processes that stimulate or constrain innovation. Econometric methods are used to generalize the positioning of Ukraine in the global innovation space, and the DEA method is used to study the relative individual effectiveness of the innovation environment and innovation activities in Europe.
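
For readers unfamiliar with DEA (Data Envelopment Analysis), the sketch below solves the classic input-oriented CCR model with SciPy's linear-programming routine. This is a textbook formulation, not the authors' model, and the input/output matrices are invented toy numbers rather than the paper's indicator set.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k (envelopment form).

    X: (m inputs x n units), Y: (s outputs x n units).
    min theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],  lam >= 0.
    Decision vector: [theta, lam_1 .. lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_ub = np.block([[-X[:, [k]], X],                 # X lam - theta x_k <= 0
                     [np.zeros((s, 1)), -Y]])         # -Y lam <= -y_k
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 2 inputs (R&D spend, staff), 1 output (innovation score), 4 units.
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for k in range(X.shape[1]):
    print(f"unit {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```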

Authors and Affiliations

Iryna Voronenko (1)
Nataliia Klymenko (2)
Olena Nahorna (3)

  1. National University of Life and Environmental Sciences of Ukraine, Department of Information Systems and Technologies, Ukraine
  2. National University of Life and Environmental Sciences of Ukraine, Department of Economic Cybernetics, Ukraine
  3. National University of Life and Environmental Sciences of Ukraine, Department of Marketing and International Trade, Ukraine

Abstract

Lifetime biographical and publication histories of 2,326 full professors were examined, using a combination of administrative, biographical, and bibliometric data. Retrospectively constructed productivity, promotion-age and promotion-speed classes were examined. About 50% of currently top-productive professors have been top productive throughout their academic careers, over 30-40 years. Top-to-bottom and bottom-to-top transitions between productivity classes over academic careers are very rare. We used prestige-normalized productivity, in which more weight is given to articles in high-impact than in low-impact journals, recognizing the highly stratified nature of academic science. The combination of biographical and demographic data with raw Scopus publication data from the past 50 years (N = 935,167 articles) made it possible to assign all full professors retrospectively to productivity, promotion-age, and promotion-speed classes. In logistic regression models, two powerful predictors of belonging to the top productivity class for full professors emerged: being highly productive as an associate professor and as an assistant professor (increasing the odds by 180% and 360%, respectively). Neither gender nor age (biological or academic) emerged as statistically significant. Our findings have important implications for hiring policies, as scientists usually stay in Polish academia for several decades.
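
To make the reported odds concrete: a logistic-regression coefficient b corresponds to an odds ratio exp(b), so "increasing the odds by 180%" means an odds ratio of about 2.8, and 360% means about 4.6. A minimal sketch with simulated (not the study's) data, showing how such odds ratios are recovered from fitted coefficients:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2326
# Hypothetical 0/1 predictors: top productivity as assistant / associate professor.
top_assistant = rng.integers(0, 2, n)
top_associate = rng.integers(0, 2, n)
# Simulate membership in the top class with odds ratios 4.6 and 2.8.
logit_p = -2.0 + np.log(4.6) * top_assistant + np.log(2.8) * top_associate
top_full = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([top_assistant, top_associate]))
fit = sm.Logit(top_full, X).fit(disp=False)
print(np.exp(fit.params[1:]))   # odds ratios, roughly [4.6, 2.8]
```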

Authors and Affiliations

Marek Kwiek (1)
Wojciech Roszka (2)

  1. Institute for Advanced Studies in Social Sciences and Humanities (IAS), Adam Mickiewicz University, Poznań
  2. Poznań University of Economics and Business; Center for Public Policy Studies, Adam Mickiewicz University, Poznań

Abstract

In this paper we analyze the phenomenon of quitting academic science and show how quitting differs between men and women, across academic disciplines and over time. The approach is comprehensive: global, based on cohorts of scientists, and longitudinal, observing the publication activity of individual scientists over time. Using metadata from Scopus, a global bibliometric database of publications and citations, we analyze the publication careers of scientists from 38 OECD countries who began publishing in 2000 (N = 142,776) and 2010 (N = 232,843). The paper tests the usefulness of large bibliometric datasets for a global analysis of academic careers.
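
A hedged sketch of the kind of longitudinal operation involved: treating a scientist as having quit once their publication record ends, computed from per-author publication years. The toy rows stand in for Scopus metadata, and the cutoff rule is an assumption for illustration.

```python
import pandas as pd

# Toy stand-in for Scopus metadata: one row per (author, publication year).
pubs = pd.DataFrame({
    "author": ["a1", "a1", "a1", "a2", "a2", "a3"],
    "year":   [2000, 2004, 2012, 2000, 2003, 2010],
})

careers = pubs.groupby("author")["year"].agg(first="min", last="max")
cohort_2000 = careers[careers["first"] == 2000]      # the 2000 cohort
# Flag authors whose record ends before an observation cutoff as having quit.
CUTOFF = 2010
cohort_2000 = cohort_2000.assign(quit=cohort_2000["last"] < CUTOFF)
print(cohort_2000)
```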

Authors and Affiliations

Marek Kwiek (1)
Łukasz Szymula (2)

  1. Institute for Advanced Studies in Social Sciences and Humanities (IAS), Adam Mickiewicz University, Poznań
  2. Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Poznań

Abstract

We talk to Roman Topór-Mądry, MD, chairman of the PAS Committee on Public Health, and Tomasz Zdrojewski, MD, from the Jagiellonian University's Public Health Institute, coauthors of the first Report on Diabetes in Poland, about counting the number of diabetics and data-gathering techniques.


Authors and Affiliations

Roman Topór-Mądry
Tomasz Zdrojewski

Abstract

Wikipedia, one of the world’s most popular websites, owes its success to its authors – i.e. to all of us. But how do we know if the information it offers is reliable?

Authors and Affiliations

Włodzimierz Lewoniewski (1)

  1. Department of Information Systems, Poznań University of Economics and Business

Abstract

Mathematics offers tools renowned for their objectivity, which is a cornerstone of scientific inquiry. Yet the question arises: how accurately do statistical methods really reflect the complexities of the real world?

Authors and Affiliations

Dominik Tomaszewski (1)

  1. PAS Institute of Dendrology in Kórnik

Abstract

The problem of the poor quality of traffic accident data assembled in national databases has been addressed in the European project InDeV. Vulnerable road users (pedestrians, cyclists, motorcyclists and moped riders) are especially affected by the underreporting of accidents and the misreporting of injury severity. Analyses of data from the European CARE database show differences between countries in accident number trends as well as in fatality and injury rates that are difficult to explain. A survey of InDeV project partners from 7 EU countries helped to identify differences between their countries in accident and injury definitions as well as in reporting and data-checking procedures. Measures to improve the quality of accident data are proposed, such as including pedestrian falls in accident statistics, precisely defining minimum injury and combining police accident records with hospital data.
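
The last proposed measure, combining police records with hospital data, amounts to record linkage. A minimal pandas sketch under assumed inputs: the shared accident_id key and the MAIS-based severity check are illustrative placeholders, not the project's linkage protocol.

```python
import pandas as pd

police = pd.DataFrame({
    "accident_id": [1, 2, 3],
    "date": ["2017-03-01", "2017-03-02", "2017-03-05"],
    "police_severity": ["slight", "serious", "slight"],
})
hospital = pd.DataFrame({
    "accident_id": [2, 3],
    "mais_score": [3, 2],   # Maximum Abbreviated Injury Scale from hospital records
})

# Left join keeps police-only cases; hospital data flags misreported severity.
linked = police.merge(hospital, on="accident_id", how="left")
linked["underreported"] = ((linked["police_severity"] == "slight")
                           & (linked["mais_score"] >= 3))
print(linked)
```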


Authors and Affiliations

P. Olszewski
B. Osińska
P. Szagała
P. Skoczyński
A. Zielińska

Abstract

Population data are generally provided by state census organisations at predefined census enumeration units. However, these datasets are very often required at user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address this problem. This article is one such attempt, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics derived from the national topographic database and by the average area of a flat. An experimental gridded population surface was created for Opatów county, a sparsely populated rural region in Central Poland. The method relies on geolocating population counts in buildings, taking into account building volume and structural building type, and then aggregating the population totals in a 1 km quadrilateral grid. The overall quality of the population distribution surface, expressed by the RMSE, equals 9 persons, and the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated and that 80% of the people live on 33% of the county territory.
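
A simplified sketch of the disaggregation step: allocate a county population to buildings in proportion to volume times a structural-type weight, then sum per grid cell. The weights, building records and cell labels are illustrative assumptions, not the paper's calibration.

```python
import pandas as pd

buildings = pd.DataFrame({
    "grid_cell": ["A1", "A1", "B2", "B2", "B3"],
    "volume_m3": [900.0, 1500.0, 600.0, 2400.0, 1200.0],
    "btype":     ["single_family", "multi_family", "single_family",
                  "multi_family", "single_family"],
})
TYPE_WEIGHT = {"single_family": 1.0, "multi_family": 1.4}   # assumed weights
COUNTY_POPULATION = 10_000

# Population share of each building, proportional to weighted volume.
w = buildings["volume_m3"] * buildings["btype"].map(TYPE_WEIGHT)
buildings["population"] = COUNTY_POPULATION * w / w.sum()

# Aggregate building populations into the 1 km grid cells.
grid = buildings.groupby("grid_cell")["population"].sum().round()
print(grid)
```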

Authors and Affiliations

Beata Calka
Elżbieta Bielecka
Katarzyna Zdunkiewicz

Abstract

The purpose of this article was to provide users with information about the number of buildings in the analyzed OpenStreetMap (OSM) dataset in the form of data completeness indicators, namely the standard OSM building areal completeness index (C Index), the numerical completeness index (COUNT Index) and the OSM building location accuracy index (TP Index). The official Polish vector database BDOT10k (Database of Topographic Objects) was designated as the reference dataset. Analyses were carried out for Piaseczno County in Poland, an area differentiated by land cover structure and urbanization level. The results were presented in the form of a bivariate choropleth map with individually selected class intervals suited to the statistical distribution of the analyzed data. The results confirm that OSM building completeness close to 100% was obtained mainly in built-up areas. Areas with a commission (excess) of OSM buildings were distinguished in terms of area and number of buildings, and lower completeness rates were observed in less urbanized areas. The developed methodology for assessing the quality of OSM building data and visualizing the results to assist users in selecting a dataset is universal and can be applied to any OSM polygon features, as well as to the peer review of other spatial datasets of comparable thematic scope and detail.
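
Per analysis unit, an areal completeness index of this kind reduces to the ratio of OSM building footprint area to the reference (BDOT10k) area. A hedged sketch with precomputed areas standing in for the GIS overlay step:

```python
import pandas as pd

# Total building footprint area per grid cell, precomputed from both datasets.
cells = pd.DataFrame({
    "cell": ["c1", "c2", "c3"],
    "osm_area_m2":     [9800.0, 4100.0, 7300.0],
    "bdot10k_area_m2": [10000.0, 8000.0, 7000.0],
})
cells["c_index_pct"] = 100 * cells["osm_area_m2"] / cells["bdot10k_area_m2"]
# Values above 100% indicate commission (excess OSM buildings) in that cell.
print(cells)
```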

Authors and Affiliations

Sylwia Borkowska (1)
Elzbieta Bielecka (1)
Krzysztof Pokonieczny (1)

  1. Military University of Technology, Warsaw, Poland

Abstract

3D maps are becoming increasingly popular, not only due to their accessibility and clarity of reception, but above all because they provide comprehensive spatial information. Three-dimensional cartographic studies meet the accuracy requirements set for traditional 2D studies and, additionally, naturally connect the place where a phenomenon occurs with its spatial location. Due to the scale of the objects and the difficulty of obtaining comprehensive data from a single source, a frequent procedure is to integrate measurement, cartographic and photogrammetric information with databases in order to generate a comprehensive study in the form of a 3D map. This paper presents a method of acquiring, processing and integrating data from terrestrial laser scanning (TLS) and unmanned aerial vehicles (UAVs). Point clouds representing places and objects are the starting point for building 3D models of buildings and technical objects, as well as for constructing the Digital Terrain Model. To supplement the spatial information about the object, the geodetic database of utility network records was integrated with the model. Common georeferencing, based on the global coordinate system, allowed a comprehensive basemap to be generated in three-dimensional form.
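
The integration step boils down to expressing every cloud in one global frame. A minimal numpy sketch: the 4x4 rigid transforms are assumed to come from prior registration, and the random points stand in for real TLS/UAV clouds.

```python
import numpy as np

def to_global(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# Hypothetical registration results for each sensor (translation-only here).
T_tls = np.eye(4); T_tls[:3, 3] = [500.0, 200.0, 10.0]
T_uav = np.eye(4); T_uav[:3, 3] = [500.2, 199.8, 55.0]

tls_points = np.random.rand(1000, 3) * 20
uav_points = np.random.rand(5000, 3) * 100
merged = np.vstack([to_global(tls_points, T_tls), to_global(uav_points, T_uav)])
print(merged.shape)   # one cloud in the common (global) coordinate system
```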

Authors and Affiliations

Przemyslaw Klapa (1)
Bartosz Mitka (1)
Mariusz Zygmunt (1)

  1. University of Agriculture in Krakow, Krakow, Poland

Abstract

Power big data contains much information related to equipment faults, and its analysis and processing can realize fault diagnosis. This study analyzed the application of association rules in power big data processing. First, association rules and the Apriori algorithm were introduced. Then, to address the shortcomings of the Apriori algorithm, an IM-Apriori algorithm was designed and a simulation experiment was carried out. The results showed that the IM-Apriori algorithm had a significant advantage over the Apriori algorithm in running time: when the number of transactions was 100,000, the IM-Apriori algorithm ran 38.42% faster than the Apriori algorithm. The IM-Apriori algorithm was little affected by the value of the minimum support threshold. Compared with the Extreme Learning Machine (ELM), the IM-Apriori algorithm had better accuracy. The experimental results show the effectiveness of the IM-Apriori algorithm in fault diagnosis, and it can be further promoted and applied to power grid equipment.
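
The IM-Apriori variant is not specified in the abstract, but the baseline Apriori idea, iteratively growing itemsets that meet a minimum support, can be sketched compactly. The toy fault-log transactions are invented; this is not the authors' implementation.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their support counts."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    current = [frozenset([i]) for i in items]
    k = 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Grow candidates: unions of frequent k-itemsets that have size k + 1.
        current = list({a | b for a, b in combinations(level, 2)
                        if len(a | b) == k + 1})
        k += 1
    return frequent

# Toy fault-log transactions: alarm codes that co-occur in one incident.
logs = [{"overheat", "vibration"}, {"overheat", "vibration", "trip"},
        {"vibration", "trip"}, {"overheat", "vibration"}]
print(apriori(logs, min_support=2))
```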


Authors and Affiliations

Jianguo Qian
Bingquan Zhu
Ying Li
Zhengchai Shi

Abstract

The proper management of water resources is currently an important issue, not only in Poland but also worldwide. Water resource management involves various activities, including monitoring, modelling, assessment and designing the condition and extent of water sources. Efficient management of water resources is essential, especially in rural areas, where it ensures greater stability and efficiency of production in all sectors of the economy and contributes to the well-being of the ecosystem.
The analyses performed have demonstrated that the time of origin of the cadastral data defining the course of water boundaries has a significant effect on their quality. Having analysed the factors (timeliness, completeness, redundancy) used to assess the quality of cadastral data, a clear trend of changes in time was noticed. Thus, it is possible to estimate the quality of cadastral data defining the course of watercourse boundaries based only on information about the method, time and area of data origin, in the context of the former partition sector.
This research paper presents an original method of assessing the quality of spatial data used to determine the course of the shoreline of natural watercourses with unregulated channels flowing through agricultural land.
The research has also demonstrated that, to increase the efficiency of the work, the smallest number of principal factors should be selected for the final analysis. Limiting the analyses to a smaller number of factors does not affect the final result, yet it definitely reduces the amount of work.

Authors and Affiliations

Anita Kwartnik-Pruc (1)
Aneta Mączyńska (2)

  1. AGH University of Science and Technology, Faculty of Mining Surveying and Environmental Engineering, al. Adama Mickiewicza 30, 30-059 Kraków
  2. Geodetic and Construction Company “Geo-bud”, 26-220 Stąporków, Poland

Abstract

With the rapid development of remote sensing technology, our ability to obtain remote sensing data has improved to an unprecedented level; we have entered an era of big data. Remote sensing data clearly show the characteristics of big data, such as hyperspectral bands, high spatial resolution and high temporal resolution, resulting in a significant increase in the volume, variety, velocity and veracity of the data. This paper proposes a feature-supporting, scalable and efficient data cube for time-series analysis applications, and uses spatial feature data and remote sensing data for a comparative study of water cover and vegetation change. In this system, the feature data cube building and the distributed executor engine are critical to supporting the analysis of large spatiotemporal RS data with spatial features. Feature translation ensures that geographic objects can be combined with satellite data to build a feature data cube for analysis, and a distributed execution engine based on Dask ensures the efficient analysis of large-scale RS data. This work can provide convenient and efficient multidimensional data services for many remote sensing applications.
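
A hedged sketch of the Dask-based execution idea: a (time, y, x) data cube chunked so that a per-pixel time-series statistic runs in parallel across tiles. The random array stands in for real remote sensing data, and the chunking is an assumption for the example.

```python
import dask.array as da

# A toy (time, y, x) cube: 120 time steps over a 2000 x 2000 raster,
# chunked so each task processes one time slice of a 500 x 500 tile.
cube = da.random.random((120, 2000, 2000), chunks=(1, 500, 500))

# Per-pixel temporal mean, e.g. an average water index over the time series.
temporal_mean = cube.mean(axis=0)
result = temporal_mean.compute()      # triggers the parallel execution
print(result.shape)                   # (2000, 2000)
```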

Authors and Affiliations

Yassine Sabri (1)
Fadoua Bahja (1)
Henk Pet (2)

  1. Laboratory of Innovation in Management and Engineering for Enterprise (LIMIE), ISGA Rabat, 27 Avenue Oqba, Agdal, Rabat, Morocco
  2. Terra Motion Limited, 11 Ingenuity Centre, Innovation Park, Jubilee Campus, University of Nottingham, Nottingham NG7 2TU, UK

Abstract

The paper indicates the significance of supervising and assessing the stability of foundry process parameters. The parameters that can be effectively tracked and analysed using dedicated computer systems for data acquisition and exploration (Acquisition and Data Mining, A&DM, systems) are pointed out. The state of research and methods of solving production problems with the help of computational intelligence (CI) systems are characterised. The research part shows the capabilities of an original A&DM system in selected analyses of recorded data for forecasting cast defects (effects), on the example of a chosen iron foundry. Implementation tests and analyses were performed on selected assortments of grey and nodular cast iron grades (castings with a maximum weight of 50 kg, cast on automatic moulding lines in disposable green sand moulds). Validation test results and the applied methods and algorithms (the original system operating in real production conditions) confirmed the effectiveness of the assumptions and of the methods described. The usability and benefits of A&DM systems in foundries are measurable and lead to the stabilisation of production conditions in the sections covered by these systems and, as a result, to improved casting quality and a reduced number of defects.


Authors and Affiliations

R. Sika
Z. Ignaszak

Abstract

Slag refining with waste materials was analysed using DTA methods. The paper applies a method of determining the reduction capability of slag solutions based on the Carbo-N-Ox method. Relations between the stimulators in the environment-slag-metal system make it possible to initiate mass exchange reactions in the process of slag refining. The procedure presented in this work enables the choice of the basic composition of the slag, the necessary quantities of stimulating components, and an assessment of refining ability. The Slag-Prop program developed here, once experimental data are entered, allows further corrections to the composition of the proposed mixtures; properly elaborated factors of the multistage reaction, with the essential use of suitable stimulators, should be applied.

Authors and Affiliations

A.W. Bydałek
S. Biernat
P. Schlafka

Abstract

Visualizations of mathematical functions have myriad applications in our daily lives, from the economy all the way to medicine.

Authors and Affiliations

Paweł Dłotko (1)

  1. Institute of Mathematics, Polish Academy of Sciences, Warsaw

Abstract

The paper presents an analysis of the possibility of using selected hash functions submitted to the SHA-3 competition in the SDEx encryption method. The group of functions considered comprises the finalists of the SHA-3 competition, i.e. BLAKE, Grøstl, JH, Keccak and Skein. The aim of the analysis is to develop a more secure and faster cryptographic algorithm compared to the current version of the SDEx method, which uses SHA-512 and the AES algorithm. When considering the speed of the algorithms, mainly software implementations are taken into account, as they are the most commonly used.
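
Python's hashlib makes a rough software speed comparison easy to reproduce. Note that hashlib ships the standardized Keccak (sha3_512) and BLAKE2 (blake2b, a successor of the SHA-3 finalist BLAKE), but not Grøstl, JH or Skein, so this only approximates the paper's comparison:

```python
import hashlib
import timeit

data = b"x" * (1 << 20)   # 1 MiB of input

for name in ("sha512", "sha3_512", "blake2b"):
    t = timeit.timeit(lambda: hashlib.new(name, data).digest(), number=100)
    print(f"{name:9s} {t:.3f} s per 100 MiB")
```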

Authors and Affiliations

Artur Hłobaż (1)

  1. Faculty of Physics and Applied Informatics, University of Lodz, Poland

Abstract

A data warehouse (DW) is a large centralized database that stores data integrated from multiple, usually heterogeneous external data sources (EDSs). DW content is processed by so-called On-Line Analytical Processing (OLAP) applications, which analyze business trends and discover anomalies and hidden dependencies in the data. These applications are part of decision support systems. EDSs constantly change their content and often change their structures. These changes have to be propagated into a DW, causing its evolution. The propagation of content changes is implemented by means of materialized views, whereas the propagation of structural changes is mainly based on temporal extensions and schema evolution, which limits the application of these techniques. Our approach to handling the evolution of a DW is based on schema and data versioning. This mechanism is the core of a so-called multiversion data warehouse, which is composed of a set of versions. A single DW version is in turn composed of a schema version and the set of data described by that schema version. Every DW version stores a DW state that is valid within a certain time period. In this paper we present: (1) a formal model of a multiversion data warehouse, (2) a set of operators, with their formal semantics, that support DW evolution, and (3) an analysis of the impact of these operators on DW data and user analytical queries. The presented formal model was the basis for implementing a prototype multiversion DW system.
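
A minimal sketch of the core notion, a DW composed of versions, each pairing a schema version with data valid in a time period. This is an illustrative structure under assumed names, not the authors' formal model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DWVersion:
    schema: dict                  # schema version: table -> list of columns
    data: dict                    # data described by this schema version
    valid_from: date
    valid_to: date | None = None  # None marks the current version

versions = [
    DWVersion({"sales": ["id", "amount"]},
              {"sales": [(1, 100.0)]},
              date(2020, 1, 1), date(2021, 1, 1)),
    # A structural change in the EDS propagated as a new schema version.
    DWVersion({"sales": ["id", "amount", "currency"]},
              {"sales": [(2, 80.0, "EUR")]},
              date(2021, 1, 1)),
]

def version_at(when: date) -> DWVersion:
    """Return the DW version whose validity period covers the given date."""
    return next(v for v in versions
                if v.valid_from <= when and (v.valid_to is None or when < v.valid_to))

print(version_at(date(2020, 6, 1)).schema)
```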


Authors and Affiliations

B. Bębel
Z. Królikowski
R. Wrembel
