In 2015, an important part of the official evaluation of Polish scientific journals was left to experts’ judgement. In this paper we try to establish which observable factors (with available data) are closely related to the outcome of the experts’ evaluation of Polish journals in the economic sciences. Using a multiple regression model, we show that only 5 of 17 variables significantly explain almost 50% of the empirical variance in the experts’ evaluation. The determinants of particular interest, which are not part of the formal criteria and are unrelated to impact on global science, are the number of citations (mainly in Polish journals) and affiliation with the Polish Academy of Sciences.
Maternal mortality poses a great problem in the health sector of most African countries. Nigeria’s maternal mortality ratio remains high despite efforts made to meet Millennium Development Goal 5 (MDG5). This study used the Lagos State community health survey 2011 and the Lagos State health budget allocations 2011 to examine the effect of government expenditure on the maternal mortality ratio. Factors that contribute to a high maternal mortality rate, such as inadequate transportation facilities, lack of awareness and inadequate infrastructure, can be traced back to revenue, though under different ministries. The other ministries need to work with and support the Ministry of Health in the fight against maternal mortality, especially in Lagos State. Secondary data were compiled from the state budget, records of deaths in different local governments in the state and relevant reviewed literature. Regression analysis was used to test the hypothesis, and it was found that government expenditure does not have a significant effect on maternal mortality based on the R-squared coefficient. However, the correlation coefficient gives a contrasting result. Hence, in further research, government expenditure from other local government areas needs to be taken into consideration to arrive at a valid conclusion. It is difficult to ascertain how much of the revenue allocated was put to appropriate use, due to a high level of corruption.
This paper analyses the influence of the applied microwave power output on the intensification of drying in the context of process kinetics and product quality. The study involved testing samples of beech wood (Fagus sylvatica L.). Wood samples were dried in a microwave chamber at power output levels of 168 W, 210 W, 273 W, 336 W and 378 W. For comparison, wood was dried convectively at 40°C and 87% relative air humidity. The analysis of drying process kinetics involved nonlinear regression employing the Gompertz model. Dried samples were subjected to static bending tests in order to determine the influence of the applied microwave power on the modulus of elasticity (MOE) and modulus of rupture (MOR). The obtained correlations were verified statistically. Analysis of drying kinetics, strength test results and Tukey’s test showed that microwaves of a relatively low power level significantly shortened the drying time but did not reduce the final quality of the dried wood compared with conventional drying.
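A Gompertz-based kinetics fit of the kind mentioned above can be sketched as follows. The parameters and "measurements" here are synthetic, not data from the study; the asymptote is assumed known, which lets the model be linearised and fitted by ordinary least squares instead of full nonlinear regression.

```python
import math

# Gompertz model for cumulative water loss during drying (synthetic sketch):
#   y(t) = a * exp(-b * exp(-k * t))
# a = asymptotic water loss, b, k = shape/rate parameters (all hypothetical).
a_true, b_true, k_true = 100.0, 5.0, 0.8

def gompertz(t, a, b, k):
    return a * math.exp(-b * math.exp(-k * t))

# synthetic "measurements" every half hour
ts = [0.5 * i for i in range(1, 15)]
ys = [gompertz(t, a_true, b_true, k_true) for t in ts]

# With the asymptote a assumed known, the model linearises:
#   ln(ln(a / y)) = ln(b) - k * t
X = ts
Y = [math.log(math.log(a_true / y)) for y in ys]
n = len(X)
mx, my = sum(X) / n, sum(Y) / n
slope = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
intercept = my - slope * mx

k_est = -slope
b_est = math.exp(intercept)
print(k_est, b_est)  # recovers k = 0.8, b = 5.0 on noise-free data
```

With noisy measurements, or when the asymptote itself must be estimated, a full nonlinear least-squares fit of all three parameters would replace the linearisation.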
Lightweight Self-Compacting Concrete (LWSCC) might be the answer to the increasing construction requirements for slenderer and more heavily reinforced structural elements. However, there are few studies proving its suitability for real construction projects. In conjunction with traditional methods, artificial intelligence-based modeling methods have been applied in recent years to simulate the non-linear and complex behavior of concrete. Twenty-one laboratory experimental investigations on the mechanical properties of LWSCC published in the last 12 years are analyzed in this study. The collected information is used to investigate the relationship between compressive strength, modulus of elasticity and splitting tensile strength in LWSCC. The analytically proposed ANFIS model is verified by multi-factor linear regression analysis. A comparison of the estimates shows that the ANFIS analysis gives more consistent results and is preferred for estimating the properties of LWSCC.
One of the basic parameters describing road traffic is Annual Average Daily Traffic (AADT). Its accurate determination is possible only on the basis of data from continuous traffic measurement. However, such data are unavailable for most road sections, so AADT must be determined on the basis of short periods of random measurements. This article presents different methods of estimating AADT on the basis of daily traffic volume (VOL), including the traditional Factor Approach, newly developed Regression Models and Artificial Neural Network models. As explanatory variables, quantitative variables (VOL and the share of heavy vehicles) as well as qualitative variables (day of the week, month, level of AADT, cross-section, road class, nature of the area, spatial linking, region of Poland and the nature of traffic patterns) were used. Based on comparisons of the presented methods, the Factor Approach was identified as the most useful.
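The Factor Approach can be illustrated with a minimal sketch: a single short-period daily count is expanded to an AADT estimate by dividing out how much the given month and weekday typically deviate from the annual average. The factor values and counts below are hypothetical, not values from the article.

```python
# Factor Approach sketch (illustrative factors, not from the article):
# a monthly factor M_m = ADT_month / AADT and a weekday factor
# D_d = ADT_day / AADT express how a given day deviates from the annual mean.
monthly_factor = {"July": 1.20, "January": 0.85}   # hypothetical values
weekday_factor = {"Wed": 1.05, "Sun": 0.80}        # hypothetical values

def estimate_aadt(vol, month, weekday):
    """Expand a single daily volume (VOL) to an AADT estimate."""
    return vol / (monthly_factor[month] * weekday_factor[weekday])

print(round(estimate_aadt(12600, "July", "Wed")))  # 10000
```

In practice the factors come from permanent count stations grouped by traffic pattern, which is why assigning a road section to the right factor group is the critical step of the method.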
The paper presents a transformation between two height datums (Kronstadt’60 and Kronstadt’86, the latter being part of the present National Spatial Reference System in Poland) using a geostatistical method, kriging. As the height differences between the two datums reveal a visible trend, a natural decision is to use a kriging variant that takes into account nonstationarity in the average behavior of the spatial process (the height differences between the two datums). Hence, two methods were applied: a hybrid technique (combining Trend Surface Analysis with ordinary kriging on least-squares residuals) and universal kriging. The background of the two methods is presented. The two methods were compared with respect to their prediction capabilities in a cross-validation process, and additionally they were compared with the results obtained by applying a polynomial regression transformation model. The results obtained within this study prove that the structure hidden in the residual part of the model and exploited by the kriging methods may improve the prediction capabilities of the transformation model.
Rockburst is a common engineering geological hazard. In order to evaluate rockburst liability in kimberlite at an underground diamond mine, a method combining generalized regression neural networks (GRNN) and the fruit fly optimization algorithm (FOA) is employed. Based on two fundamental premises of rockburst occurrence, depth, σθ, σc, σt, B1, B2, SCF and Wet are determined as rockburst indicators, which are also the input vectors of the GRNN model. A total of 132 data groups obtained from rockburst cases around the world are chosen as training samples for the GRNN model; FOA is used to seek the optimal parameter σ that generates the most accurate GRNN model. The trained GRNN model is adopted to evaluate burst liability in kimberlite pipes. The same eight rockburst indicators are acquired from lab tests, the mine site and an FEM model as test sample features. The evaluation results made by the GRNN are confirmed by a rockburst case at this mine. GRNN requires no prior knowledge about the nature of the relationship between the input and output variables and avoids analyzing the mechanism of rockburst, which gives it a bright prospect for engineering rockburst potential evaluation.
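A GRNN prediction is simply a Gaussian-kernel weighted average of the training outputs, with the smoothing parameter σ as its only free parameter. The toy sketch below uses 1-D synthetic data (not the eight rockburst indicators) and a plain grid search over σ standing in for FOA, minimising the leave-one-out error that such an optimiser would target.

```python
import math

# Minimal GRNN sketch on toy 1-D data (hypothetical burst-liability scores;
# the real model uses 8 indicators and FOA instead of a grid search).
train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.0, 0.8, 0.9, 0.1, 0.0]

def grnn_predict(x, xs, ys, sigma):
    """GRNN estimate: Gaussian-kernel weighted mean of training outputs."""
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def loo_error(sigma):
    """Leave-one-out squared error, the criterion an optimiser would minimise."""
    err = 0.0
    for i in range(len(train_x)):
        xs = train_x[:i] + train_x[i + 1:]
        ys = train_y[:i] + train_y[i + 1:]
        err += (grnn_predict(train_x[i], xs, ys, sigma) - train_y[i]) ** 2
    return err

# crude grid search over sigma in place of FOA
best_sigma = min((s / 10 for s in range(1, 31)), key=loo_error)
print(best_sigma, grnn_predict(1.5, train_x, train_y, best_sigma))
```

Because the prediction is a weighted average of observed outputs, a GRNN never extrapolates beyond the range of its training targets, which matches the abstract's point that no functional form of the rockburst mechanism needs to be assumed.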
The mechanical properties of aluminum-silicon alloys are determined by the state of the alloying components in the structure, i.e. a plastic metallic matrix formed from an Al-based solid solution, together with hard and brittle silicon precipitates. The size and distribution of silicon crystals are the main factors affecting the field of practical applications of such alloys. Recording the crystallization processes of the alloys at the preparation stage is directly connected with the practical implementation of crystallization theory in controlling technological processes, enabling a suitable material structure to be obtained and its usage for specific requirements to be determined. The paper presents an attempt to evaluate the correlation between the values of characteristic points lying on crystallization curves recorded with the TVDA method developed by the author (commonly known as the ATND method) and the hardness of the tested alloy. Based on the characteristic points from the TVDA method, the hardness of the EN AC-AlSi9Mg alloy modified with strontium has been described by a statistically significant first-order polynomial.
The paper presents a description of the methods used and exemplary mathematical models classified as theoretical-empirical models of thermal processes. Such models encompass equations resulting from the laws of physics together with additional empirical functions describing processes for which analytical models are complex and difficult to develop. The principles of development, the advantages and disadvantages of the presented models, and an assessment of prediction quality are discussed. Mathematical models of a steam boiler, a steam turbine and a heat recovery steam generator are described. Exemplary calculation results are presented and compared with measurements.
Weak value amplification is a measurement technique where small quantum mechanical interactions are amplified and manifested macroscopically in the output of a measurement apparatus. It is shown here that the linear nature of weak value amplification provides a straightforward comparative methodology for using the value of a known small interaction to estimate the value of an unknown small interaction. The methodology is illustrated by applying it to quantify the unknown size of an optical Goos-Hänchen shift of a laser beam induced at a glass/gold interface using the known size of the shift at a glass/air interface.
This paper presents a multivariate regression model predicting the drift in Coordinate Measuring Machine (CMM) behaviour. Evaluation tests on a CMM with a multi-step gauge were carried out following an extended version of an ISO evaluation procedure, with a periodicity of at least once a week over more than five months. The test procedure consists of measuring the gauge over several range volumes, spatial locations, distances and repetitions. The procedure, environmental conditions and even the gauge were kept invariable, so a massive measurement dataset was collected over time under high-repeatability conditions. A multivariate regression analysis revealed the main parameters that could affect CMM behaviour and then detected a trend in the CMM performance drift. A performance model that considers both the size of the measured dimension and the elapsed time since the last CMM calibration has been developed. This model can predict the CMM performance and measurement reliability over time and can also estimate an optimized period between calibrations for a specific measurement length or accuracy level.
Air core solenoids, often single-layer and with significant spacing between turns, are commonly used to ensure low stray capacitance, as they form part of many sensors and instruments. The problem of correctly estimating the stray capacitance is relevant both during design and when validating measurement results; the expected value is so low as to be influenced by any stray capacitance of the external measurement instrument. A simplified method is proposed that does not perturb the stray capacitance of the solenoid under test; the method is based on resonance with an external capacitor and on the use of a linear regression technique.
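One common way such a resonance-plus-regression scheme works, sketched here under assumed values rather than the paper's data: adding a known external capacitor C_ext shifts the resonant frequency, and 1/(2πf)² is linear in C_ext with slope L and intercept L·C_stray, so an ordinary least-squares line yields both the inductance and the stray capacitance. The inductance, stray value and "measurements" below are synthetic and noise-free.

```python
import math

# Resonance sketch: 1 / (2*pi*f)^2 = L*C_ext + L*C_stray
# slope = L, intercept = L*C_stray  =>  C_stray = intercept / slope.
L_true = 1e-3          # hypothetical 1 mH solenoid
C_stray_true = 5e-12   # hypothetical 5 pF stray capacitance

def resonant_freq(c_ext):
    """Synthesised resonant frequency with an added external capacitor."""
    return 1.0 / (2 * math.pi * math.sqrt(L_true * (c_ext + C_stray_true)))

c_ext = [10e-12, 22e-12, 47e-12, 100e-12]
y = [1.0 / (2 * math.pi * resonant_freq(c)) ** 2 for c in c_ext]

# ordinary least-squares line through (c_ext, y)
n = len(c_ext)
mx, my = sum(c_ext) / n, sum(y) / n
slope = sum((c - mx) * (v - my) for c, v in zip(c_ext, y)) / sum((c - mx) ** 2 for c in c_ext)
intercept = my - slope * mx

L_est = slope
C_stray_est = intercept / slope
print(L_est, C_stray_est)  # recovers 1e-3 H and 5e-12 F on noise-free data
```

With real measurements the regression additionally averages out frequency-reading noise, which is precisely the advantage of fitting several external capacitor values rather than solving from a single resonance.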
The Gaussian mixture model (GMM) method is popular and efficient for voice conversion (VC), but it is often subject to overfitting. In this paper, the principal component regression (PCR) method is adopted for the spectral mapping between source speech and target speech, and the number of principal components is adjusted properly to prevent overfitting. Then, in order to better model the nonlinear relationships between the source and target speech, the kernel principal component regression (KPCR) method is also proposed. Moreover, a KPCR method combined with GMM is further proposed to improve the accuracy of conversion. In addition, the discontinuity and oversmoothing problems of the traditional GMM method are also addressed. On the one hand, in order to solve the discontinuity problem, an adaptive median filter is adopted to smooth the posterior probabilities. On the other hand, the two mixture components with the highest posterior probabilities for each frame are chosen for VC to reduce the oversmoothing problem. Finally, objective and subjective experiments are carried out, and the results demonstrate that the proposed approach performs considerably better than the GMM method: in the objective tests the proposed method shows lower cepstral distances and higher identification rates, while in the subjective tests it obtains higher scores for preference and perceptual quality.
In the paper we present robust estimation methods, based on bounded innovation propagation filters and quantile regression, applied to measuring Value at Risk. To illustrate the advantages of the robust methods, we compare VaR forecasts for several groups of instruments in a period of high uncertainty on the financial markets with forecasts modelled using traditional quasi-likelihood estimation. For comparative purposes we use three groups of tests, i.e. those based on Bernoulli trial models, on decision-making aspects, and on the expected shortfall.
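The quantile-regression side of VaR estimation rests on the pinball (check) loss, whose minimiser is the τ-quantile of the return distribution. A minimal sketch on simulated returns (the return series and confidence level are illustrative, not data from the study, and a constant predictor stands in for a full regression):

```python
import random

# Pinball (check) loss: rho_tau(u) = u * (tau - 1{u < 0}), u = x - q.
# Minimising its average over constants q yields the empirical tau-quantile,
# which is the mechanism quantile regression exploits for VaR.
random.seed(42)
returns = [random.gauss(0.0, 0.02) for _ in range(2000)]  # simulated returns

def pinball_loss(q, xs, tau):
    """Average check loss of the constant predictor q at quantile level tau."""
    return sum((tau - (x < q)) * (x - q) for x in xs) / len(xs)

tau = 0.05                 # 95% VaR looks at the 5% left tail of returns
q_star = min(sorted(returns), key=lambda q: pinball_loss(q, returns, tau))

var_95 = -q_star           # report VaR as a positive loss figure
print(var_95)
```

A full quantile regression replaces the constant q with a linear function of risk factors but minimises exactly the same loss; its insensitivity to extreme residual magnitudes is what makes the resulting VaR estimates robust.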
The purpose of this study is to identify relationships between the fluidity values obtained by computer simulation and by an experimental test in a horizontal three-channel mould designed in accordance with Measurement Systems Analysis. An Al-Si alloy was the model material. The factors affecting the fluidity were varied in the following ranges: Si content 5 wt.% – 12 wt.%, Fe content 0.15 wt.% – 0.3 wt.%, pouring temperature 605°C – 830°C, and pouring speed 100 g·s–1 – 400 g·s–1. The NovaFlow&Solid software was used for the simulations. No statistically significant difference was found between the fluidity value calculated from the regression equation and that obtained by experiment. This design simplifies assessing the capability of the fluidity measurement process, fully replacing experiments with calculation using the regression equation.
The article discusses the development of an approximation model of selected plastic and mechanical properties obtained from compression tests of model materials used in physical modeling. Physical modeling with soft model materials, such as synthetic wax with various modifiers, is a popular tool used as an alternative to, or verification of, numerical modeling of bulk metal forming processes. In order to develop an algorithm facilitating the choice of a material model to simulate the behavior of real metallic materials used in industrial production processes, decision tree induction was applied. First, the Statistica program was used for data mining, which made it possible to find the relationship between the percentages of particular constituents of the model material (base material and modifiers) and the yield strength, critical strain and maximum strain, and to indicate the most important variables determining the shape of the stress–strain curve. Next, using decision tree induction, an approximation model was developed, which made it possible to create an algorithm facilitating the selection of individual modifying components. The last stage of the research was verification of the correctness of the developed algorithm. The obtained results indicate the possibility of using decision tree induction to approximate selected properties of model materials simulating the behavior of real materials, thus eliminating the need for costly and time-consuming experiments on metallic material.
The paper is devoted to discussing the consequences of the so-called Frisch-Waugh Theorem for posterior inference and Bayesian model comparison. We adopt a generalised normal linear regression framework and weaken its assumptions in order to cover non-normal, jointly elliptical sampling distributions, autoregressive specifications, additional nuisance parameters and multi-equation SURE or VAR models. The main result is that inference based on the original full Bayesian model can be obtained using transformed data and reduced parameter spaces, provided the prior density for scale or precision parameters is appropriately modified.
During machining processes, heat is generated as a result of plastic deformation of the metal and friction along the tool–chip and tool–workpiece interfaces. In materials with high thermal conductivity, such as aluminium alloys, a large amount of this heat is absorbed by the workpiece. This raises the temperature of the workpiece, which may lead to dimensional inaccuracies, surface damage and deformation, so the temperature rise must be controlled. This paper focuses on the measurement, analysis and prediction of the workpiece temperature rise during dry end milling of Al 6063. The control factors used for the experimentation were the number of flutes, spindle speed, depth of cut and feed rate. The Taguchi method was employed for planning the experimentation, and an L18 orthogonal array was selected. The temperature rise of the workpiece was measured with a K-type thermocouple embedded in the workpiece. Signal-to-noise (S/N) ratio analysis was carried out using the lower-the-better quality characteristic. Depth of cut was identified as the most significant factor affecting the workpiece temperature rise, followed by spindle speed. Analysis of variance (ANOVA) was employed to identify the significant parameters, and the ANOVA results were found to be in line with the S/N ratio analysis. Regression analysis was used to develop an empirical equation for the temperature rise; values calculated with the regression equation were in good agreement with the measured ones. Finally, confirmation tests were carried out to verify the results, showing that the Taguchi method is an effective way to determine optimised parameters for minimising the workpiece temperature rise.
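The lower-the-better S/N ratio used in such a Taguchi analysis has a simple closed form, S/N = −10·log₁₀((1/n)·Σyᵢ²); the higher the S/N, the smaller and more consistent the response. The temperature values below are invented for illustration, not measurements from the experiment.

```python
import math

# Lower-the-better S/N ratio: S/N = -10 * log10( mean(y_i^2) ).
# A higher S/N means a smaller, more consistent temperature rise.
def sn_lower_is_better(ys):
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

run_a = [8.2, 8.5, 8.1]      # hypothetical temperature rises (deg C), setting A
run_b = [12.4, 13.0, 12.1]   # hypothetical temperature rises (deg C), setting B

print(sn_lower_is_better(run_a), sn_lower_is_better(run_b))
# setting A yields the higher S/N, i.e. the preferable parameter level
```

Averaging these S/N values per factor level over the orthogonal-array runs is what ranks the factors (here, depth of cut first, spindle speed second).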
Landfill leachate is a potential source of groundwater pollution. The substratum of a municipal waste landfill can be used to remove pollutants from leachate. Model research was performed using a sand bed and artificially prepared leachates. The effectiveness of filtration in a bed of specific thickness was assessed based on the total solids content. The results of the model research indicated that the mass of pollutants contained in leachate filtered by a layer of porous soil (mf) depends on the mass of pollutants supplied (md). The determined regression functions agree with the empirical values of the variable m′f and allow for a qualitative and quantitative assessment of the influence of the analysed independent variables (m′d, l, ω) on the mass of pollutants flowing from the medium sand layer. The results of this research can be used to forecast the level of pollution of soil and underground waters lying in the zone of potential impact of a municipal waste landfill.
In this paper, the results of correlation analyses between air temperature and electricity demand, obtained by linear regression and the Wavelet Coherence (WTC) approach for three different European countries, are presented. The results show a very close relationship between air temperature and electricity demand for the selected power systems; moreover, the WTC approach reveals interesting dynamics of this correlation across the time-frequency space and provides useful information for a more complete understanding of the related consumption.
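The linear part of such an analysis can be sketched with a plain Pearson coefficient. The temperature and demand figures below are invented, merely mimicking the typical inverse winter relationship of demand rising as temperature drops; the study's actual series and the WTC computation are not reproduced here.

```python
import math

# Pearson correlation between daily mean temperature and electricity demand
# (both series hypothetical, for illustration only).
temp   = [-5, -2, 0, 3, 7, 10, 14]                   # deg C
demand = [21.2, 20.5, 20.1, 19.0, 18.2, 17.6, 16.8]  # GW

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(temp, demand)
print(r)  # strongly negative: demand falls as temperature rises
```

A single coefficient like this collapses all time scales into one number, which is exactly the limitation the wavelet coherence approach addresses by resolving the correlation in the time-frequency plane.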
The research concerned the influence of the chemical composition of austenitic steels on their mechanical properties. The resulting properties of austenitic steel castings are significantly influenced by the solidification time, which affects the size of the primary grain as well as the distribution of elements within the dendrite and its parts with regard to the last solidification points in the interdendritic melt. During solidification, intensive segregation of all admixtures occurs in the melt, which causes a whole range of serious metallurgical defects and also significantly influences the subsequent precipitation of carbides and intermetallic phases. Chemical heterogeneity then affects the structure and mechanical properties of the casting. In a planned experiment, we cast steels containing 18 to 28% Cr and 8 to 28% Ni with variable carbon and nitrogen contents. By testing the tensile strength of the cast specimens we determined the Rp0.2, Rm and A5 values. The dependence of the mechanical properties on the chemical composition was described by regression equations. The results of the planned experiment allow us to control the chemical composition for a given austenitic steel grade so as to achieve the required mechanical property values.
Geometric deviations of free-form surfaces are attributed to many phenomena that occur during machining, both systematic (deterministic) and random in character. Measurements of free-form surfaces are performed with the use of numerically controlled CMMs on the basis of a CAD model, which results in obtaining coordinates of discrete measurement points. The spatial coordinates assigned at each measurement point include both a deterministic component and a random component at different proportions. The deterministic component of deviations is in fact the systematic component of processing errors, which is repetitive in nature. A CAD representation of deterministic geometric deviations might constitute the basis for completing a number of tasks connected with measurement and processing of free-form surfaces. The paper presents the results of testing a methodology of determining CAD models by estimating deterministic geometric deviations. The research was performed on simulated deviations superimposed on the CAD model of a nominal surface. Regression analysis, an iterative procedure, spatial statistics methods, and NURBS modelling were used for establishing the model.
The electroencephalogram (EEG) is one of the biomedical signals measured during all-night polysomnography to diagnose sleep disorders, including sleep apnoea. Usually two central EEG channels (C3-A2 and C4-A1) are recorded, but typically only one of them is used. The purpose of this work was to compare discriminative features characterizing normal breathing, as well as obstructive and central sleep apnoeas, derived from these central EEG channels. The same methodology of feature extraction and selection was applied separately to both synchronous signals. The features were extracted by combined discrete wavelet and Hilbert transforms. Afterwards, statistical indexes were calculated and the features were selected using analysis of variance and multivariate regression. According to the obtained results, there is a partial difference in the information carried by the C3-A2 and C4-A1 EEG channels, so data from both channels should preferably be used together for automatic sleep apnoea detection and differentiation.