In individual dogs, some sperm parameters change significantly after thawing despite good raw semen quality, and this cannot be predicted. We therefore investigated whether motility parameters objectively obtained by CASA, membrane integrity (MI), cell morphology, or a combination of these are suitable to improve the prediction of bad post-thaw quality. For this purpose, 250 sperm analysis protocols from 141 healthy stud dogs, all patients presented for sperm cryopreservation, were evaluated and a Classification and Regression Tree (CART) analysis was performed. Semen was routinely collected, analysed, and frozen using a modified Uppsala system. After thawing, samples were routinely examined using CASA, fluorescence microscopy for membrane integrity (MI), and Hancock's fixation for evaluation of cell morphology. Samples were sorted by post-thaw progressive motility (P) into good (P ≥ 50%, n=135) and bad freezers (P < 50%, n=115). Among bad freezers, 73.9% additionally showed post-thaw total morphological aberrations of >40% and/or MI <50%. Bad freezers were significantly older than good freezers (p<0.05). Progressive motility (P), curvilinear velocity (VCL), straightness coefficient (STR), and linearity coefficient (LIN) were potential predictors of post-thaw sperm quality, since specificity was best (85.8%) and sensitivity (75.4%) and accuracy (80.4%) were good. For these objectively measured raw semen parameters, cut-off values were calculated that allow prediction of bad post-thaw results with high accuracy: P = 83.1%, VCL = 161.3 µm/s, STR = 0.83, and LIN = 0.48. Raw semen samples with values below these cut-off values will have below-average post-thaw quality with a probability of 85.8%. We conclude that P, VCL, STR and LIN, when combined, are potential predictors of the outcome of sperm cryopreservation.
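A minimal sketch of how the reported cut-offs can be applied as a combined rule and scored with the same metrics the abstract quotes (sensitivity, specificity, accuracy). The cut-off values come from the abstract; the rule "flag when all four parameters fall below their cut-offs" is one possible reading of the combined CART criterion, and the sample records used are hypothetical.

```python
# Cut-off values reported in the abstract for raw-semen parameters
CUTOFFS = {"P": 83.1, "VCL": 161.3, "STR": 0.83, "LIN": 0.48}

def predict_bad_freezer(sample):
    """Flag a sample as a likely 'bad freezer' when all raw-semen
    parameters fall below their cut-offs (one reading of the combined rule)."""
    return all(sample[k] < v for k, v in CUTOFFS.items())

def score(samples, labels):
    """labels: True = sample actually turned out a bad freezer after thawing."""
    preds = [predict_bad_freezer(s) for s in samples]
    tp = sum(p and y for p, y in zip(preds, labels))
    tn = sum(not p and not y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum(not p and y for p, y in zip(preds, labels))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy
```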
This paper presents the spatial distribution of changes in the value of the predicted insulation index of clothing (Iclp) in the Norwegian Arctic for the period 1971-2000. For this study, data from six meteorological stations were used: Ny-Ålesund, Svalbard Airport, Hornsund, Hopen, Bjørnøya and Jan Mayen. The impact of atmospheric circulation on the course of the Iclp index was analyzed using the catalogue of circulation types by Niedźwiedź (1993, 2002), the circulation index according to Murray and Lewis (1966) modified by Niedźwiedź (2001), the North Atlantic Oscillation Index according to Luterbacher et al. (1999, 2002), and the Arctic Oscillation Index (Thompson and Wallace 1998).
This paper investigates the application of a novel Model Predictive Control structure to a drive system with an induction motor. The proposed controller has a cascade-free structure that operates on a vector of electromagnetic (torque, flux) and mechanical (speed) states of the system. The long-horizon version of MPC is investigated in the paper. In order to reduce the computational complexity of the algorithm, an explicit version is applied. The influence of different factors (length of the control and prediction horizons, values of weights) on the performance of the drive system is investigated. The effectiveness of the proposed approach is validated by experimental tests.
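A toy illustration of long-horizon MPC with tunable weights and an explicit (closed-form) solution, on a discrete double integrator rather than the paper's induction-motor model; all system matrices, horizon length and weights below are illustrative assumptions. Stacking the predictions X = F·x0 + G·U turns the unconstrained finite-horizon problem into least squares, and applying only the first optimal input each step gives the receding-horizon law.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1 s
B = np.array([[0.005], [0.1]])
N = 20                                    # prediction horizon
qw, rw = 1.0, 0.01                        # state / input weights

# Prediction matrices: x_{i+1} = A^{i+1} x0 + sum_{j<=i} A^{i-j} B u_j
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

# Explicit unconstrained solution: U* = -(G'QG + R)^{-1} G'Q F x0,
# so the first input is a constant state feedback, as in explicit MPC.
Q = qw * np.eye(2 * N)
R = rw * np.eye(N)
K = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ F)

def mpc_step(x):
    return float((-K @ x)[0])   # receding horizon: apply the first input only

x = np.array([1.0, 0.0])        # initial state: position 1, speed 0
for _ in range(100):
    x = A @ x + B.flatten() * mpc_step(x)
```

Varying `N`, `qw` and `rw` reproduces, in miniature, the horizon-length and weight studies described in the abstract.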
This paper points out that the ARMA models followed by the squares of GARCH processes have dependent and volatile innovations, and gives explicit and general forms of these innovations. The volatility function of the ARMA innovations is shown to be the square of the corresponding GARCH volatility function. The prediction of GARCH squares is facilitated by the ARMA structure, and predictive intervals are considered. Further, the developments suggest families of volatile ARMA processes.
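The abstract's claim can be made concrete in the simplest case. This is the standard ARMA representation of GARCH(1,1) squares, shown here only to fix ideas; the paper treats the general case.

```latex
% GARCH(1,1): \varepsilon_t = \sigma_t z_t, \quad
% \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 .
% With \nu_t = \varepsilon_t^2 - \sigma_t^2 (a martingale difference),
\[
  \varepsilon_t^2 \;=\; \omega + (\alpha+\beta)\,\varepsilon_{t-1}^2
                  \;-\; \beta\,\nu_{t-1} \;+\; \nu_t ,
\]
% an ARMA(1,1) whose innovations \nu_t are dependent and volatile:
% for Gaussian z_t,
\[
  \operatorname{Var}\!\left(\nu_t \mid \mathcal{F}_{t-1}\right)
  \;=\; 2\,\sigma_t^4 ,
\]
% i.e. the innovation volatility is the square of the GARCH volatility.
```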
The recent financial crisis has seen huge swings in corporate bond spreads. It is analyzed what quality VAR-based forecasts would have had prior to and during the crisis period. Given that forecasts of the mean of interest rates or financial market prices are subject to large uncertainty regardless of the class of models used, major emphasis is put on the quality of measures of forecast uncertainty. The VAR considered is based on a model first suggested in the literature in 2005. In a rolling window analysis, both the model’s forecasts and joint prediction bands are calculated making use of recently proposed methods. Besides a traditional analysis of forecast quality, the performance of the proposed prediction bands is assessed. It is shown that the actual coverage of joint prediction bands is superior to the coverage of naïve prediction bands constructed pointwise.
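A Monte Carlo sketch of why naïve pointwise bands undercover a whole forecast path: applying pointwise 95% intervals at every step of an H-step-ahead forecast gives a much lower probability that the entire path stays inside the band. A Bonferroni adjustment is used here as the simplest way to build a joint band; the paper's band construction is more refined, and the i.i.d. standardized forecast errors are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
H, n_paths = 12, 5000
z_point = 1.96          # pointwise 95% normal quantile
z_joint = 2.87          # approx. quantile for per-step level 1 - 0.05/H

paths = rng.standard_normal((n_paths, H))    # standardized forecast errors
inside_naive = (np.abs(paths) <= z_point).all(axis=1)
inside_joint = (np.abs(paths) <= z_joint).all(axis=1)

cov_naive = inside_naive.mean()   # ~0.95**12 ~ 0.54 for independent errors
cov_joint = inside_joint.mean()   # >= 0.95 by the Bonferroni inequality
```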
The paper presents a formula useful for the prediction of loss density in soft magnetic materials, which takes into account multi-scale energy dissipation. A universal phenomenological P(Bm, f) relationship is used for loss prediction in chosen soft magnetic materials. A bootstrap method is used to generate additional data points, which makes it possible to increase the prediction accuracy. A substantial accuracy improvement for the estimated model parameters is obtained when the additional data points are taken into account. The proposed description could be useful both for device designers and for researchers involved in computational electromagnetism.
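A sketch of the bootstrap idea on a generic Steinmetz-type loss law P = k·f^a·Bm^b, which becomes linear in the logs and can be fitted by least squares; resampling the measurement points with replacement generates additional parameter estimates whose spread indicates parameter uncertainty. The paper's multi-scale P(Bm, f) formula is more elaborate; the coefficients and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
k_true, a_true, b_true = 0.05, 1.4, 2.1
f = np.array([50., 100., 200., 400., 800., 1600.])   # frequency [Hz]
Bm = np.array([0.2, 0.5, 1.0, 1.5, 0.8, 0.3])        # peak induction [T]
P = k_true * f**a_true * Bm**b_true                  # noiseless 'measurements'

# log-linear least squares: log P = log k + a log f + b log Bm
X = np.column_stack([np.ones_like(f), np.log(f), np.log(Bm)])
y = np.log(P)

def fit(idx):
    c, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return c                    # [log k, a, b]

n = len(f)
boot = np.array([fit(rng.integers(0, n, n)) for _ in range(200)])
# median over bootstrap replicates is robust to the rare degenerate resample
a_hat = np.median(boot[:, 1])
b_hat = np.median(boot[:, 2])
```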
This research presents a method for the simulation of magneto-mechanical system dynamics taking motion and eddy currents into account. The major contribution of this work lies in the coupling of the field-motion problem, with windings considered as current-forced massive conductors, the modelling of the motion of a rotor composed of two conductive materials, and the torque calculation employing a special optimal predictor combined with the modified Maxwell stress tensor method. The 3D model of the device is analysed by the time-stepping finite element method. Mechanical motion of the rotor is determined by solving the second-order equation of motion. The magnetic and mechanical equations are coupled in an iterative solving process. The presented method is verified by solving the TEAM Workshop Problem 30.
The author champions the belief that Karl Marx offered a theory of capitalism, and not a theory of socialism. This explains, she argues, why we cannot find a detailed and well-constructed conception of human society that will exist in the future. Marx continued, however, to draw prognostic conclusions from his diagnosis of the capitalist status quo, and his numerous manuscripts are replete with social predictions. They were different at different times, and as the capitalist system tended to change in his lifetime, so changed Marx’s expectations about the future course of events. One thing remained unchanged, however. He always proclaimed the coming of a classless community based on the principle that a free development of each is a necessary prerequisite of a free development of all.
Scientific output analysis in Poland takes place in many ways, using both central and local databases. The article discusses the contents and bibliometric functions of the most important bibliographic databases, i.e. “People of Science”, the Polish Scientific Bibliography and the employers’ local registration system Expertus. The authors evaluate these tools from the perspective of the ability to compare the effectiveness of individual researchers as well as to stimulate the development of scientific careers. As an alternative to the analytical spectrum of these external tools, the authors present their own application that allows visualization of scientific achievements. According to the authors’ observations, the Scientific Visualiser can enrich the individual information space of the contemporary scientist. The dedicated application facilitates the evaluation of publication activity, raises awareness of keeping bibliographic data up to date, helps in discovering relationships between research fields, and inspires the broadening of intellectual horizons and cooperation networks. On the other hand, it can also be a tool supporting administrative activities such as employee evaluation, promotion proceedings, accreditation, expert selection, and distribution of funds.
The human brain is “the perfect guessing machine” (James V. Stone (2012) Vision and Brain, Cambridge, Mass.: The MIT Press, p. 155), trying to interpret sensory data in the light of previous biases or beliefs. Bayesian inference is carried out by three complex networks of the human brain: the salience network, the central executive network, and the default mode network. Their function is analysed both in neurotypical persons and in Attention Deficit Disorder. The modern human being, having a predictive brain and an overloaded mind, must develop a social identity, whose evolution probably went through three stages: social selection based on punishment, sexual selection based on reputation, and group selection based on identity.
The article discusses changes in Polish regulations concerning assessment of the climate hazard in underground mines. Currently, the main empirical index representing heat strain, used to qualify a workplace to one of the climate hazard levels in Poland, is the equivalent climate temperature. This simple heat index allows easy and quick assessment of the climate hazard. Simple heat indices, however, rely to a major extent on simplifications and are developed for specific working environments. Currently, the best methods used in the evaluation of microclimate conditions in the workplace are those based on the theory of human thermal balance, where the physiological parameters characterising heat strain are body water loss and the internal core temperature of the human body. The article describes the results of research on the use of the equivalent climate temperature for heat strain evaluation in underground mining excavations. For this purpose, the numerical model of heat exchange between man and his environment taken from PN-EN ISO 7933:2005 was used. The research discussed in this paper has been carried out considering the working conditions and clothing insulation in use in underground mines. The analyses performed in the study allowed the formulation of conclusions concerning the application of the equivalent climate temperature as a criterion for the assessment of climate hazards in underground mines.
The average grades of copper mines drop as high-grade copper ores are extracted. Studies conducted in the mining field show uncertainty in economic calculations and insufficiency of the initial information. This matter has drawn attention to processing methods which not only extract low-grade copper ores but also decrease adverse environmental impacts. In this research, an optimum cut-off grades model is developed with the objective function of Net Present Value (NPV) maximization. The costs of the processing methods are also involved in the model. In consequence, an optimization algorithm is presented to calculate and evaluate both the maximum NPV and the optimum cut-off grades. Since the selling price of the final product has always been considered one of the major risks in the economic calculations and design of mines, it was included in the modeling through a price prediction algorithm. The results of the algorithm demonstrate that the cost of lost opportunity and the prediction of the selling price are the two main factors driving the decrease of most of the cut-off grades in the last years of the mines’ production.
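A toy sketch of the core optimization step: choosing the cut-off grade that maximizes NPV over a simple grade-tonnage distribution. The grades, tonnages, prices and costs below are invented; the paper's model additionally handles processing-method costs, opportunity cost and price prediction.

```python
import numpy as np

grades = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])       # % Cu, block grade bins
tonnes = np.array([30., 25., 20., 15., 7., 3.]) * 1e6    # tonnage per grade bin
price = 6000.0                                            # $/t of metal
mining_cost, proc_cost = 8.0, 25.0                        # $/t of rock
recovery, rate, years = 0.85, 0.10, 5                     # metallurgy, discount, life

def npv(cutoff):
    mask = grades >= cutoff                   # blocks sent to processing
    ore_t = tonnes[mask].sum()
    metal = (tonnes[mask] * grades[mask] / 100 * recovery).sum()
    cash = metal * price - ore_t * (mining_cost + proc_cost)
    annual = cash / years                     # spread cash flow over mine life
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

cutoffs = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
best = max(cutoffs, key=npv)                  # NPV-maximizing cut-off grade
```

With these numbers the break-even grade is about 0.65% Cu, so the search settles on the 0.8% cut-off: lower cut-offs process money-losing blocks, higher ones discard profitable ore.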
The aim of the paper is the comparison of the least squares prediction presented by Heiskanen and Moritz (1967) in the classical handbook “Physical Geodesy” with the geostatistical method of simple kriging, as well as, in the case of Gaussian random fields, their equivalence to conditional expectation. The paper also contains short notes on the extension of simple kriging to ordinary kriging by dropping the assumption of a known mean value of the random field, as well as some necessary information on random fields, the covariance function and the semivariogram function. The semivariogram is emphasized in the paper for two reasons. Firstly, the semivariogram describes a broader class of phenomena, and for second-order stationary processes it is equivalent to the covariance function. Secondly, the analysis of different kinds of phenomena in terms of covariance is more common. Thus, it is worth introducing another function describing spatial continuity and variability. For ease of presentation, all the considerations were limited to Euclidean space (thus, to limited areas), although with some extra effort they can be extended to manifolds like the sphere, ellipsoid, etc.
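The objects compared in the abstract can be written down compactly. The following is the textbook form of the simple kriging predictor and the covariance-semivariogram link, stated here only to fix notation.

```latex
% Simple kriging (= least squares prediction) of Z(s_0) from observations
% Z(s_1),\dots,Z(s_n) of a random field with known mean m and covariance C:
\[
  \hat{Z}(s_0) \;=\; m \;+\; \mathbf{c}_0^{\mathsf T}\,\mathbf{C}^{-1}
  \bigl(\mathbf{Z}-m\mathbf{1}\bigr),
  \qquad
  \mathbf{C}=\bigl[C(s_i-s_j)\bigr]_{i,j},\quad
  \mathbf{c}_0=\bigl[C(s_i-s_0)\bigr]_i ,
\]
% which for Gaussian fields coincides with the conditional expectation
% E[Z(s_0) \mid Z(s_1),\dots,Z(s_n)]. For a second-order stationary field
% the semivariogram and the covariance function are linked by
\[
  \gamma(h) \;=\; C(0) - C(h).
\]
```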
In building applications, woven fabrics have been widely used as finishing elements of room interiors, but not specifically as sound absorbers. Considering their micro-perforation, woven fabrics should have the potential to be used as micro-perforated panel (MPP) absorbers; some measurement results have indicated such absorption ability. Hence, it is important to have a sound absorption model of woven fabrics that enables prediction of their sound absorption characteristics, which is beneficial in the engineering design phase. Treating the woven fabric as a rigid frame, an equivalent fluid model is employed based on the formulation of Johnson-Champoux-Allard (JCA). The model is then validated against measurement results, where three kinds of commercially available woven fabrics are evaluated by considering their perforation properties. It is found that the model can reasonably predict their sound absorption coefficients. However, the presence of perturbations in the pores gives rise to inaccuracy in the resistive component of the predicted surface impedance. The use of the measured static flow resistivity and a corrected viscous characteristic length in the calculations is useful to cope with such a situation. Alternatively, an optimized simple model as a function of flow resistivity is also applicable in this case.
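A sketch of a rigid-frame equivalent fluid (JCA) calculation for a thin porous layer backed by a rigid wall, of the kind the abstract applies to woven fabrics. The five JCA parameters and the layer thickness below are illustrative assumptions, not measured fabric data; the e^{+jωt} time convention is used throughout.

```python
import numpy as np

rho0, c0, eta = 1.213, 342.0, 1.84e-5     # air: density, sound speed, viscosity
gamma, P0, Pr = 1.4, 101325.0, 0.71       # adiabatic index, pressure, Prandtl no.
phi, alpha_inf = 0.04, 1.0                 # porosity, tortuosity (assumed)
sigma = 2.0e5                              # static flow resistivity [N s/m^4]
Lam = Lam_t = 5e-5                         # viscous/thermal char. lengths [m]
d = 5e-4                                   # layer thickness [m]

def absorption(freq):
    w = 2 * np.pi * freq
    # JCA effective density and bulk modulus of the equivalent fluid
    rho = (alpha_inf * rho0 / phi) * (
        1 + (sigma * phi) / (1j * w * rho0 * alpha_inf)
        * np.sqrt(1 + 4j * alpha_inf**2 * eta * rho0 * w
                  / (sigma**2 * Lam**2 * phi**2)))
    K = (gamma * P0 / phi) / (
        gamma - (gamma - 1) / (
            1 + (8 * eta) / (1j * Lam_t**2 * Pr * w * rho0)
            * np.sqrt(1 + 1j * rho0 * w * Pr * Lam_t**2 / (16 * eta))))
    Zc, k = np.sqrt(rho * K), w * np.sqrt(rho / K)
    Zs = -1j * Zc / np.tan(k * d)          # rigid-backed surface impedance
    R = (Zs - rho0 * c0) / (Zs + rho0 * c0)
    return 1 - abs(R)**2                   # normal-incidence absorption

alphas = [absorption(f) for f in range(100, 5001, 100)]
```

Mounted directly on a rigid wall a thin fabric absorbs little; in practice an air cavity behind the fabric (added via a transfer-matrix step) is what produces useful absorption peaks.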
The term roughness is used to describe a specific sound sensation which may occur when listening to stimuli with more than one spectral component within the same critical band. It is believed that the spectral components interact inside the cochlea, which leads to fluctuations in the neural signal and, in turn, to a sensation of roughness. This study presents a roughness model composed of two successive stages: peripheral and central. The peripheral stage models the function of the peripheral ear. The central stage predicts roughness from the temporal envelope of the signal processed by the peripheral stage. The roughness model was shown to account for the perceived roughness of various types of acoustic stimuli, including the stimuli with temporal envelopes that are not sinusoidal. It thus accounted for effects of the phase and the shape of the temporal envelope on roughness. The model performance was poor for unmodulated bandpass noise stimuli.
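A sketch of the envelope-extraction step on which such a two-stage model rests: the central stage evaluates the temporal envelope of the peripherally filtered signal. Here the peripheral stage is omitted and the analytic-signal envelope of a single amplitude-modulated tone is computed directly; the signal parameters are illustrative (70 Hz modulation is in the range typically associated with strong roughness).

```python
import numpy as np

fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
m, fc, fm = 0.8, 1000.0, 70.0              # modulation depth, carrier, mod. rate
x = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

def envelope(sig):
    """Magnitude of the analytic signal via the FFT (Hilbert transform)."""
    n = len(sig)
    X = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0                # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

env = envelope(x)
core = env[len(env) // 4: -len(env) // 4]  # keep the central part
depth = (core.max() - core.min()) / (core.max() + core.min())
```

The recovered `depth` matches the imposed modulation depth m; a central stage would map such envelope fluctuations (their depth, rate and shape) to a roughness value.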
A strip yield model implementation by the present authors is applied to predict fatigue crack growth observed in structural steel specimens under various constant and variable amplitude loading conditions. Attention is paid to the model calibration using the constraint factors in view of the dependence of both the crack closure mechanism and the material stress-strain response on the load history. Prediction capabilities of the model are considered in the context of the incompatibility between the crack growth resistance for constant and variable amplitude loading.
The paper presents a local dynamic approach to the integration of an ensemble of predictors. Classical fusion of many predictor results takes into account all units and computes a weighted average of the results of all units forming the ensemble. This paper proposes a different approach. The prediction of the time series for the next day is made here by only one member of the ensemble: the one that was best in the learning stage for the training input vector closest to the input data actually applied. Thanks to such an arrangement we avoid the situation in which the worst unit reduces the accuracy of the whole ensemble. In this way we obtain an increased level of statistical forecasting accuracy, since each task is performed by the best-suited predictor. Moreover, such an arrangement of the integration allows using units of very different quality without decreasing the quality of the final prediction. The numerical experiments performed for forecasting the next-day average PM10 pollution and for forecasting the 24-element vector of hourly loads of the power system have confirmed the superiority of the presented approach. All quality measures of the forecast have been significantly improved.
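A minimal sketch of the integration scheme described above: instead of averaging all ensemble members, delegate each query to the single member that performed best on the training input closest to it. The two "predictors" and the data are toy stand-ins, each deliberately accurate on only half of the input space.

```python
import numpy as np

X_train = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train[:, 0])

# Two imperfect predictors: each is exact on one half of the input space.
predictors = [
    lambda x: np.sin(2 * np.pi * x) + 0.5 * (x > 0.5),   # good for x <= 0.5
    lambda x: np.sin(2 * np.pi * x) + 0.5 * (x <= 0.5),  # good for x > 0.5
]

# Learning stage: record which member was best at every training point.
errors = np.stack([np.abs(p(X_train[:, 0]) - y_train) for p in predictors])
best_member = errors.argmin(axis=0)          # index of the winning unit per point

def predict(x):
    """Delegate the query to the member that won on the nearest training input."""
    i = np.abs(X_train[:, 0] - x).argmin()
    return predictors[best_member[i]](x)
```

Averaging the two members would be off by 0.25 everywhere; the local selection is exact on both halves, which is the point the abstract makes about weak units not degrading the ensemble.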
This paper presents a predictive torque and flux control algorithm for the synchronous reluctance machine. The algorithm performs a voltage space phasor pre-selection, followed by the computation of the switching instants for the optimum switching space phasors, with the advantages of an inherently constant switching frequency and a time-equidistant implementation on a DSP-based system. The criteria used to choose the appropriate voltage space phasor depend on the state of the machine and the deviations of torque and flux at the end of the cycle. The model of the machine has been developed in a d-q frame of coordinates attached to the rotor and takes into account the magnetic saturation in both the d and q axes and the cross-saturation phenomenon between the axes. Therefore, a very good approximation of this effect is achieved and the performance of the machine is improved. Several simulations and experimental results using a DSP and a commercially available machine show the validity of the proposed control scheme.
This overview paper presents and compares different methods traditionally used for estimating the parameters of damped sinusoids. Firstly, direct nonlinear least squares fitting of the signal model in the time and frequency domains is described. Next, possible applications of the Hilbert transform for signal demodulation are presented. Then, a wide range of autoregressive modelling methods valid for damped sinusoids is discussed, in which frequency and damping are estimated from calculated signal linear self-prediction coefficients. These methods aim at solving, directly or by least squares, a matrix linear equation in which samples of the signal or of its autocorrelation function are used. The Prony, Steiglitz-McBride, Kumaresan-Tufts, Total Least Squares, Matrix Pencil, Yule-Walker and Pisarenko methods are taken into account. Finally, the interpolated discrete Fourier transform is presented, with examples of the Bertocco, Yoshida, and Agrež algorithms. The Matlab codes of all the discussed methods are given. The second part of the paper presents simulation results, compared with the Cramér-Rao lower bound and commented on. All tested methods are compared with respect to their accuracy (systematic errors), noise robustness, required signal length, and computational complexity.
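A minimal Prony-style example of the linear self-prediction idea the abstract describes (the paper's own codes are in Matlab; this Python sketch uses arbitrary test values). A single damped sinusoid satisfies a second-order linear prediction x[n] = a1·x[n-1] + a2·x[n-2]; solving for a1, a2 by least squares and taking the roots of z² - a1·z - a2 recovers the frequency and damping from the pole location.

```python
import numpy as np

fs = 1000.0                                  # sampling rate [Hz]
n = np.arange(200)
f0, d0, phi = 50.0, 20.0, 0.3                # frequency [Hz], damping [1/s], phase
x = np.exp(-d0 * n / fs) * np.cos(2 * np.pi * f0 * n / fs + phi)

# Linear self-prediction: stack x[n-1], x[n-2] against x[n]
A = np.column_stack([x[1:-1], x[:-2]])
a, *_ = np.linalg.lstsq(A, x[2:], rcond=None)

# Poles of the prediction polynomial carry the complex exponent e^{(-d + jw)/fs}
z = np.roots([1.0, -a[0], -a[1]])
pole = z[np.argmax(z.imag)]                  # pick the upper half-plane root
f_hat = np.angle(pole) * fs / (2 * np.pi)    # estimated frequency [Hz]
d_hat = -np.log(np.abs(pole)) * fs           # estimated damping [1/s]
```

On noiseless data the recovery is exact; the methods surveyed in the paper differ mainly in how they make this step robust to noise.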
A novel VC (voice conversion) method based on hybrid SVR (support vector regression) and GMM (Gaussian mixture model) is presented in the paper. The mapping abilities of SVR and GMM are exploited to map the spectral features of the source speaker to those of the target speaker. A new strategy for F0 transformation is also presented: the F0s are modeled together with the spectral features in a joint GMM and predicted from the converted spectral features using the SVR method. Subjective and objective tests are carried out to evaluate the VC performance; experimental results show that speech converted using the proposed method obtains better quality than that converted using the state-of-the-art GMM method. Meanwhile, a VC method based on non-parallel data is also proposed, in which the speaker-specific information is investigated using the SVR method; preliminary subjective experiments demonstrate that the proposed method is feasible when a parallel corpus is not available.
The artificial neural network (ANN) method is widely used in both modeling and optimization of manufacturing processes. Determination of optimum processing parameters plays a key role as far as both cost and time are concerned within the manufacturing sector. The burnishing process is simple, easy and cost-effective, and it is thus increasingly used to replace other surface finishing processes in the manufacturing sector. This study investigates the effect of burnishing parameters such as the number of passes, burnishing force, burnishing speed and feed rate on the surface roughness and microhardness of an AZ91D magnesium alloy using different artificial neural network models (i.e. the function fitting neural network (FITNET), generalized regression neural network (GRNN), cascade-forward neural network (CFNN) and feed-forward neural network (FFNN)). A total of 1440 different estimates were made by means of ANN methods using different parameters. The best average performance results for surface roughness and microhardness are obtained by the FITNET model (i.e. mean square error (MSE): 0.00060608, mean absolute error (MAE): 0.01556013, multiple correlation coefficient (R): 0.99944545), using the Bayesian regularization training process (trainbr). The FITNET model is followed, with very small differences, by the FFNN (i.e. MAE: 0.01707086, MSE: 0.00072907, R: 0.99932069) and CFNN (i.e. MAE: 0.01759166, MSE: 0.00080154, R: 0.99924845) models, respectively. The GRNN model yielded worse estimation results (i.e. MSE: 0.00198232, MAE: 0.02973829, R: 0.99900783) compared with the other models. As a result, the MSE, MAE and R values show that it is possible to predict the surface roughness and microhardness results of the burnishing process with high accuracy using ANN models.
At the early stage of information system analysis and design, one of the challenges is to estimate the total work effort needed when only a small number of analysis artifacts is available. As a solution we propose a new method called SAMEE – Simple Adaptive Method for Effort Estimation. It is based on the idea of polynomial regression and uses selected UML artifacts such as use cases, actors, domain classes and the references between them. In this paper we describe the implementation of this method in the Enterprise Architect CASE tool and show a simple example of how to use it in real information system analysis.
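A sketch of the polynomial-regression idea behind such a method: count a size measure from early UML artifacts (use cases, actors, domain classes, references) and relate it to person-hours with a low-degree polynomial fitted to historical projects. The size weights, the degree and the historical data below are invented for illustration; SAMEE's actual metric and calibration live in the Enterprise Architect implementation described in the paper.

```python
import numpy as np

# Historical projects: (use cases, actors, classes, references) -> effort [h]
counts = np.array([
    [10,  3, 12,  20],
    [25,  5, 30,  55],
    [40,  8, 45,  90],
    [60, 10, 70, 140],
    [80, 12, 95, 190],
])
effort = np.array([300., 900., 1700., 2900., 4300.])

WEIGHTS = np.array([3.0, 1.0, 2.0, 0.5])     # assumed per-artifact weights
size = counts @ WEIGHTS                       # scalar size metric per project

# Degree-2 polynomial fit of effort vs. size; an adaptive method would
# refit these coefficients as the organisation's project history grows.
coeffs = np.polyfit(size, effort, 2)

def estimate(use_cases, actors, classes, references):
    s = np.array([use_cases, actors, classes, references]) @ WEIGHTS
    return float(np.polyval(coeffs, s))
```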
There were two aims of the research. One was to enable more or less automatic confirmation of the known associations – either quantitative or qualitative – between technological data and selected properties of concrete materials. Even more important is the second aim: demonstration of the expected possibility of automatic identification of new such relationships, not yet recognized by civil engineers. The relationships are to be obtained by methods of Artificial Intelligence (AI) and are to be based on actual results from experiments on concrete materials. The reason for applying AI tools is that in Civil Engineering the real data are typically imperfect, complex, fuzzy, and often have missing details, which means that their analysis in a traditional way, by building empirical models, is hardly possible or at least cannot be done quickly. The main idea of the proposed approach was to combine the application of different AI methods in one system aimed at estimation, prediction, design and/or optimization of composite materials. The paradigm of the approach is that the unknown rules concerning the properties of concrete are hidden in experimental results and can be obtained from the analysis of examples. Different AI techniques such as artificial neural networks, machine learning and certain techniques related to statistics were applied. The data for the analysis originated from direct observations and from reports and publications on concrete technology. Among other things, it has been demonstrated that by combining different AI methods it is possible to improve the quality of the data (e.g. when encountering outliers and missing values, or in clustering problems), so that the whole data processing system gives better predictions (when applying ANNs), or the newly discovered rules are more effective (e.g. with descriptions that are more complete and, at the same time, possibly more consistent, in the case of ML algorithms).