Search results



Number of results: 49

Abstract

Air-core solenoids, possibly single-layer and with significant spacing between turns, are often used to ensure low stray capacitance, as they form part of many sensors and instruments. The problem of correctly estimating the stray capacitance is relevant both during design and for validating measurement results; the expected value is so low that it is influenced by any stray capacitance of the external measurement instrument. A simplified method is proposed that does not perturb the stray capacitance of the solenoid under test; the method is based on resonance with an external capacitor and on the use of a linear regression technique.
Go to article
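A minimal sketch of the regression idea behind such a resonance-based estimate, assuming the simple model 1/ω_r² = L(C_ext + C_p): fitting 1/ω_r² against the external capacitance C_ext by linear regression yields the inductance L as the slope and the stray capacitance C_p from the intercept. The component values and resonance frequencies below are illustrative, not measured data.

```python
# Sketch: for each external capacitor C_ext the resonance satisfies
# 1/omega_r^2 = L*(C_ext + C_p), so a straight-line fit of 1/omega_r^2 against
# C_ext yields the inductance L (slope) and L*C_p (intercept).
import numpy as np

c_ext = np.array([47e-12, 100e-12, 220e-12, 470e-12])   # external capacitors [F] (illustrative)
f_r   = np.array([1.52e6, 1.06e6, 0.72e6, 0.50e6])      # measured resonance frequencies [Hz] (illustrative)

y = 1.0 / (2.0 * np.pi * f_r) ** 2                      # 1/omega_r^2 for each capacitor
slope, intercept = np.polyfit(c_ext, y, 1)

L_est   = slope                                          # solenoid inductance [H]
C_stray = intercept / slope                              # stray capacitance [F]
print(f"L ≈ {L_est*1e6:.1f} µH, C_stray ≈ {C_stray*1e12:.1f} pF")
```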

Abstract

The objective of the paper is to analyse traceability issues in real-life gas flow measurements in complex distribution systems. The initial aim is to provide complete and traceable measurement results and calibration certificates of gas-flow meters which correspond to specific installation conditions. Extensive work has been done to enable a more credible decision on how to deal in particular situations with the measurement uncertainty, which is always a subject of a flow meter's calibration as a quantitative parameter value obtained in the laboratory, and with the qualitative statement about the error of an outdoor meter. A laboratory simulation of a complex, real-life distributed system has been designed to achieve the initial aim. As an extension of standardized procedures that refer to laboratory conditions, the proposed methods introduce additional "installation-specific" error sources. These sources can either be corrected (if identified) or otherwise considered as an additional "installation-specific" uncertainty contribution. The analysis and the results of the experimental work will contribute to more precise and accurate measurement results, thus assuring proper measurements with a known or estimated uncertainty for a specific gas flow installation. The findings presented here will also improve the existing normative documents, as well as fair trade in one of the most important and growing areas of energy consumption, with regard to the legal metrology aspects. These facts will make it possible to compare the entire quantity of gas at the input of a complex distributed system with the cumulative sum of all individual gas meters in a specific installation.
Go to article

Abstract

The paper formulates some objections to the methods of evaluation of uncertainty in noise measurement which are presented in two standards: ISO 9612 (2009) and DIN 45641 (1990). In particular, it focuses on the approximation of an equivalent sound level by a function which depends on the arithmetic average of sound levels. Depending on the nature of a random sample, the exact value of the equivalent sound level may differ significantly from the approximate one, which might lead to erroneous estimation of the uncertainty of noise indicators. The article presents an analysis of this problem and of the adequacy of the solution depending on the type of a random sample.
Go to article
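For a concrete illustration of the distinction at issue (with hypothetical levels, not data from the paper): the exact equivalent level is an energy average of the sound levels, which can differ noticeably from the arithmetic average when the sample is widely spread.

```python
# Compare the exact (energy-averaged) equivalent sound level with the arithmetic
# average of the same levels; hypothetical values for illustration only.
import numpy as np

levels_db = np.array([55.0, 57.0, 60.0, 75.0, 78.0])   # hypothetical sound levels [dB]

l_eq_exact  = 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))  # energy average
l_eq_approx = np.mean(levels_db)                                     # arithmetic average

print(f"exact L_eq = {l_eq_exact:.1f} dB, arithmetic mean = {l_eq_approx:.1f} dB")
```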

Abstract

The assessment of the uncertainty of measurement results, an essential problem in environmental acoustic investigations, is undertaken in the paper. Attention is drawn to the usually omitted problem of verifying the assumptions behind the classical methods of confidence interval estimation for the controlled measured quantity. In particular, the paper points to the need to verify the assumption of a normal distribution of the set of measured values, which underlies the existing and binding procedures for assessing the uncertainty of acoustic measurements. The essence of the problem concerns the binding legal and standard acts related to acoustic measurements and recommended in the 'Guide to the expression of uncertainty in measurement' (GUM) (OIML 1993), developed under the aegis of the International Bureau of Weights and Measures (BIPM). The legitimacy of the hypothesis of a normal distribution of the set of measured values in acoustic measurements is discussed and supplemented by testing its likelihood on environmental acoustic results. The Jarque-Bera test, based on the skewness and flattening (kurtosis) of the distribution, was used to verify the assumption; it allows a simultaneous analysis of the deviation from the normal distribution caused both by skewness and by flattening. The performed experiments concerned analyses of the distribution of the sound levels LD, LE, LN and LDWN, which are the basic noise indicators in assessments of environmental acoustic hazards.
Go to article
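A minimal sketch of such a normality check, using surrogate data and SciPy's implementation of the Jarque-Bera statistic (not the paper's measurement set):

```python
# Verify the normality assumption of a set of measured sound levels with the
# Jarque-Bera test, which combines deviations in skewness and kurtosis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
levels_db = rng.normal(loc=62.0, scale=3.0, size=200)   # surrogate data for illustration

statistic, p_value = stats.jarque_bera(levels_db)
if p_value < 0.05:
    print(f"JB = {statistic:.2f}, p = {p_value:.3f}: reject normality at the 5% level")
else:
    print(f"JB = {statistic:.2f}, p = {p_value:.3f}: no evidence against normality")
```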

Abstract

The main objects optimized in underground mines include stope layout, access layout and production scheduling. It is common to optimize each component sequentially, with the optimal results from one phase regarded as the input data for the next phase. Numerous methods have been developed and implemented to achieve the optimal solution for each component. In fact, the interaction between different phases is ignored in traditional optimization models, which only yield a suboptimal solution compared to an integrated optimization model. This paper proposes a simultaneous integrated optimization model to optimize the three components at the same time. The model not only optimizes the mining layout to maximize the Net Present Value (NPV), but also considers the extension sequence of stope extraction and access excavation. The production capacity and ore quality requirements are also taken into account to keep the mining process stable over the whole mine life. The model is validated on a gold deposit in China. A two-dimensional block model is built for resource estimation owing to the clear boundary of the hanging wall and footwall. The thickness and accumulation of each block are estimated by Ordinary Kriging (OK). In addition, conditional simulation is used to generate a series of equally probable orebodies. The optimization model is run on each simulated orebody to evaluate the influence of geological uncertainty on the optimal mining design and production scheduling. The risk of grade uncertainty is quantified by the probability of obtaining the expected NPV. The results indicate that the optimization model is able to produce an optimal solution that performs well under the uncertainty of grade variability.
Go to article

Abstract

Detection of leakages in pipelines is a matter of continuous research, because finding the point of the pipeline where a leak is located, and in some cases the nature of the leak, is of basic importance for a waterworks system. There are specific difficulties in finding leaks using spectral analysis techniques such as the FFT (Fast Fourier Transform), the STFT (Short-Time Fourier Transform), etc. These difficulties arise especially in complicated pipeline configurations, e.g. a zigzag one. This research focuses on the results of a new algorithm based on the FFT and compares them with a developed STFT technique. Even if other techniques are used, they are costly and difficult to manage. Moreover, the pipeline diameter is a constraint in leak detection because it influences the accuracy of the adopted algorithm. FFT and STFT are not fully adequate for the complex configurations dealt with in this paper, since they lead to ill-posed problems with increasing uncertainty. Therefore, an improved Tikhonov technique has been implemented to reinforce FFT and STFT for complex pipeline configurations. The proposed algorithm thus overcomes the aforementioned difficulties by applying a linear algebraic approach.
Go to article
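The paper's specific operator and regularisation variant are not reproduced here; as a generic reminder of how Tikhonov regularisation stabilises an ill-posed linear problem A x ≈ b, a zeroth-order sketch with a placeholder system and regularisation parameter is:

```python
# Generic zeroth-order Tikhonov regularisation of an ill-conditioned linear system
# A @ x ≈ b: minimise ||A x - b||^2 + lam * ||x||^2, i.e. x = (A^T A + lam I)^-1 A^T b.
# A, b and lam are placeholders; the paper's operator is not reproduced here.
import numpy as np

def tikhonov_solve(A: np.ndarray, b: np.ndarray, lam: float) -> np.ndarray:
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# small ill-conditioned demo system
A = np.vander(np.linspace(0, 1, 20), 8)      # nearly collinear columns
x_true = np.ones(8)
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(20)

x_ls  = np.linalg.lstsq(A, b, rcond=None)[0]  # plain least squares
x_tik = tikhonov_solve(A, b, lam=1e-4)        # regularised solution
print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_tik - x_true))
```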

Abstract

The main points of the UPoN-2018 talk and some valuable comments from the audience are briefly summarized. The talk surveyed the major issues with the notion of zero-point thermal noise in resistors and its visibility; moreover, it gave some new arguments. The new arguments support the old view of Kleen that the known measurement data "showing" zero-point Johnson noise are instrumental artifacts caused by the energy-time uncertainty principle. We pointed out that, during the spectral analysis of blackbody radiation, another uncertainty principle is relevant, namely the location-momentum uncertainty principle, which causes only a widening of spectral lines instead of the zero-point noise artifact. This is the reason why the Planck formula is correctly confirmed by blackbody radiation experiments. Finally, a conjecture about the zero-point noise spectrum of wide-band amplifiers is presented; it is yet to be tested experimentally.
Go to article
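For reference, the two spectral forms at issue are the Planck-form Johnson-Nyquist spectrum without a zero-point term and the version that includes it (standard textbook expressions, not taken from the talk itself):

```latex
S_V(f) = \frac{4R\,hf}{e^{hf/k_BT}-1}
\qquad\text{versus}\qquad
S_V(f) = 4R\,hf\left(\frac{1}{e^{hf/k_BT}-1}+\frac{1}{2}\right).
```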

Abstract

Determination of the phase difference between two sinusoidal signals with noise components, using samples of these signals, is of interest in many measurement systems. The signal samples are processed by one of many algorithms, such as 7PSF, UQDE or MSAL, to determine the phase difference. The phase difference result must be accompanied by an estimate of the measurement uncertainty. The following issues are covered in this paper: the background of the MSAL algorithm, the ways of treating the influence of bias on the phase difference result, a comparison of results obtained by applying MSAL and the other mentioned algorithms to the same real signal samples, and the evaluation of the uncertainty of the phase difference.
Go to article
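The 7PSF, UQDE and MSAL algorithms themselves are not reproduced here; as a generic illustration of extracting a phase difference from noisy samples when the frequency is known, a three-parameter least-squares sine fit of each record can be used (synthetic signals, illustrative only):

```python
# Generic illustration (not the MSAL algorithm from the paper): fit each noisy record
# to  a*cos(w t) + b*sin(w t) + c  by linear least squares (frequency assumed known),
# then take the phase difference of the two fitted sinusoids.
import numpy as np

def fitted_phase(t, x, w):
    """Three-parameter sine fit; returns the phase of x ≈ A*cos(w t + phi) + c."""
    M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(M, x, rcond=None)[0]
    return np.arctan2(-b, a)                      # a*cos + b*sin = A*cos(w t + phi)

rng = np.random.default_rng(2)
fs, f0, n = 10_000.0, 50.0, 2000
t = np.arange(n) / fs
w = 2 * np.pi * f0
true_dphi = 0.3                                   # rad
x1 = np.cos(w * t) + 0.05 * rng.standard_normal(n)
x2 = np.cos(w * t + true_dphi) + 0.05 * rng.standard_normal(n)

dphi = fitted_phase(t, x2, w) - fitted_phase(t, x1, w)
print(f"estimated phase difference ≈ {dphi:.4f} rad (true {true_dphi})")
```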

Abstract

Deterministic mechanics has been used extensively by engineers, who needed models that could predict the behaviour of designed structures and components. However, modern engineering is now shifting to a new approach in which uncertainty analysis of the model inputs makes it possible to obtain more accurate results. This paper presents an application of this approach in the field of stress analysis. A two-dimensional elastic stress model is compared with the experimental stress results of five tubes of different sizes measured with resistive strain gages. Theoretical and experimental uncertainties have been calculated by means of the Monte Carlo method and a weighted least-squares algorithm, respectively. The paper proposes that analytical engineering models should integrate an uncertainty component accounting for the uncertainties of the input data and for phenomena observed during the test that are difficult to incorporate into the analytical model. The prediction is thus improved, the theoretical result being much closer to the real case.
Go to article
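A minimal sketch of the Monte Carlo uncertainty propagation step; the paper's two-dimensional elasticity model is not reproduced, so a thin-walled-tube hoop-stress formula with placeholder input uncertainties stands in for it:

```python
# Monte Carlo propagation of input uncertainties through a simple stress model:
# hoop stress of a thin-walled tube, sigma = p*D/(2*t), with normally distributed
# uncertain inputs (all values are placeholders).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

p = rng.normal(10e5, 0.2e5, n)       # internal pressure [Pa]
D = rng.normal(0.100, 0.5e-3, n)     # mean diameter [m]
t = rng.normal(0.002, 0.05e-3, n)    # wall thickness [m]

sigma = p * D / (2.0 * t)            # hoop stress [Pa]
print(f"sigma = {sigma.mean()/1e6:.2f} MPa, "
      f"standard uncertainty = {sigma.std(ddof=1)/1e6:.2f} MPa")
```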

Abstract

The synthesis problem for optimal control systems in the class of discrete controls is considered. The problem is investigated by reducing it to a linear programming (LP) problem, with subsequent use of a dynamic version of the adaptive LP method. Both the cases of perfect and of imperfect information on the behaviour of the control system are studied. Algorithms for the optimal controller and optimal estimators are described, and the results are illustrated by examples.
Go to article

Abstract

The correlation of data contained in a series of signal sample values makes the estimation of the statistical characteristics describing such a random sample difficult. A positive correlation of the data increases the variance of the arithmetic mean in relation to a series of uncorrelated results. If the normalized autocorrelation function of the positively correlated observations and their variance are known, then the effect of the correlation can be taken into consideration computationally in the estimation process. A significant hindrance to assessing the estimation process appears when the autocorrelation function is unknown. This study describes an application of conditional averaging of positively correlated data with a Gaussian distribution for assessing the correlation of an observation series and determining the standard uncertainty of the arithmetic mean. The method presented here can be particularly useful for high values of correlation (when the value of the normalized autocorrelation function is higher than 0.5) and for more than 50 data points. The paper presents the results of theoretical research as well as of selected experiments on the processing and analysis of physical signals.
Go to article
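A sketch of the "computational" correction mentioned above, assuming the normalized ACF is known; an AR(1)-type ACF is used purely as an example:

```python
# With a known normalised ACF rho_k, the variance of the arithmetic mean of n
# positively correlated observations is
#   var(xbar) = sigma^2 / n * (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho_k),
# which exceeds sigma^2 / n for positive correlation.
import numpy as np

def var_of_mean(sigma2: float, rho: np.ndarray) -> float:
    n = len(rho) + 1
    k = np.arange(1, n)
    return sigma2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho))

n, sigma2, phi = 100, 1.0, 0.8                    # AR(1) example, rho_k = phi**k
rho = phi ** np.arange(1, n)
print("uncorrelated:", sigma2 / n)
print("correlated  :", var_of_mean(sigma2, rho))
```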

Abstract

It is now widely recognized that the evaluation of the uncertainty associated with a result is an essential part of any quantitative analysis. One way to use the estimation of measurement uncertainty as a critical metrological evaluation tool is to identify the sources of uncertainty in the analytical result, revealing the weak steps, in order to improve the method when necessary. In this work, this methodology is applied to fuel analyses, and the results show that, beyond repeatability, the relevant sources of uncertainty are the resolution of the volumetric glassware and the blank of the analytical curve, both of which are little studied.
Go to article
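A hedged illustration of how such named contributions are typically combined in quadrature following the GUM; the numerical values are placeholders, not the paper's data:

```python
# Combine the uncertainty sources named above (repeatability, glassware resolution,
# blank of the analytical curve) in quadrature, as in the GUM; illustrative values.
import math

u_repeatability = 0.08      # relative standard uncertainty (placeholder)
u_glassware     = 0.05
u_blank         = 0.04

u_combined = math.sqrt(u_repeatability**2 + u_glassware**2 + u_blank**2)
U_expanded = 2.0 * u_combined        # coverage factor k = 2 (approx. 95 %)
print(f"u_c = {u_combined:.3f}, U (k=2) = {U_expanded:.3f}")
```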

Abstract

Prior knowledge of the autocorrelation function (ACF) enables the application of an analytical formalism for the unbiased estimators of the variance s_a² and of the variance of the mean s_a²(x̄). Both can be expressed with the use of the so-called effective number of observations n_eff. We show how to adopt this formalism if only an estimate {r_k} of the ACF derived from a sample is available. A novel method is introduced, based on truncation of the {r_k} function at the point of its first transit through zero (FTZ). It can be applied to non-negative ACFs with a correlation range smaller than the sample size. Contrary to the other methods described in the literature, the FTZ method assures the finite range 1 < n_eff ≤ n for any data. The effect of replacing the standard estimator of the ACF by three alternative estimators is also investigated. Monte Carlo simulations concerning the bias and dispersion of the resulting estimators s_a and s_a(x̄) suggest that the presented formalism can be effectively used to determine measurement uncertainty. The described method is illustrated with an exemplary analysis of autocorrelated variations of the intensity of an X-ray beam diffracted from a powder sample, known as the particle statistics effect.
Go to article
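A sketch of the FTZ idea under the usual formula n_eff = n / (1 + 2 Σ_k (1 − k/n) r_k); AR(1) surrogate data are used, and the paper's alternative ACF estimators are not reproduced:

```python
# Truncate the sample ACF {r_k} at its first non-positive lag (first transit through
# zero) and use the truncated sum in the effective number of observations.
import numpy as np

def sample_acf(x: np.ndarray) -> np.ndarray:
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full / full[0]                       # standard estimator, r_0 = 1

def n_eff_ftz(x: np.ndarray) -> float:
    n = len(x)
    r = sample_acf(x)
    ftz = np.argmax(r <= 0.0)                   # index of the first transit through zero
    k = np.arange(1, ftz if ftz > 0 else 1)     # empty if the ACF never crosses zero
    return n / (1.0 + 2.0 * np.sum((1.0 - k / n) * r[k]))

rng = np.random.default_rng(4)
e = rng.standard_normal(500)
x = np.empty_like(e)                            # AR(1) data, phi = 0.7
x[0] = e[0]
for i in range(1, len(e)):
    x[i] = 0.7 * x[i - 1] + e[i]
print(f"n = {len(x)}, n_eff ≈ {n_eff_ftz(x):.1f}")
```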

Abstract

The electrical power drawn by an induction motor is distorted in the case of certain types of failures. Spectral analysis of the instantaneous power yields components that are connected with specific types of damage, and an analysis of the amplitudes and frequencies of these components makes it possible to recognize the type of fault. The paper presents a metrological analysis of the measurement system used for diagnosing induction motor bearings, based on the analysis of the instantaneous power. This system was implemented as a set of devices with dedicated software installed on a PC. A number of measurements for uncertainty estimation were carried out, and their results are presented in the paper. The results of this analysis helped to determine the measurement uncertainty that can be expected during bearing diagnostic measurements performed by the method relying on measurement and analysis of the instantaneous power of an induction machine.
Go to article
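A sketch of the signal-processing core of such a diagnostic chain, with entirely synthetic voltage and current signals; the fault-related component frequencies of the actual method are not reproduced:

```python
# The instantaneous power p(t) = u(t) * i(t) is computed from sampled voltage and
# current, and its amplitude spectrum is inspected for fault-related components.
import numpy as np

fs, dur = 10_000.0, 2.0
t = np.arange(int(fs * dur)) / fs
u = 325.0 * np.sin(2 * np.pi * 50.0 * t)                      # supply voltage [V]
i = 10.0 * np.sin(2 * np.pi * 50.0 * t - 0.5) \
    + 0.2 * np.sin(2 * np.pi * 137.0 * t)                     # current with a small extra component

p = u * i                                                      # instantaneous power [W]
spectrum = np.abs(np.fft.rfft(p * np.hanning(len(p)))) / len(p)
freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)

# list the few strongest spectral components
for idx in np.argsort(spectrum)[-5:][::-1]:
    print(f"{freqs[idx]:7.1f} Hz  amplitude {spectrum[idx]:.1f}")
```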

Abstract

Several noise indicators are determined by the logarithmic mean of a set of independent random results L1, L2, ..., Ln of the sound level under testing. Estimating the uncertainty of such averaging requires knowledge of the probability distribution of the function used in its calculation. The developed solution, leading to the recurrent determination of the probability distribution function for the estimate of the mean value of noise levels and its variance, is presented in this paper.
Go to article

Abstract

The problem of estimating long-term environmental noise hazard indicators and their uncertainty is presented in this paper. The type A standard uncertainty is defined by the standard deviation of the mean, and the rules given in the ISO/IEC Guide 98 are used in the calculations. It is usually determined by means of the classical variance estimators under the following assumptions: normality of the measurement results, an adequate sample size, lack of correlation between elements of the sample, and observation equivalence. However, such assumptions are rather questionable for acoustic measurements. This is why the authors indicate the necessity of implementing non-classical statistical solutions. An estimation approach that seeks the density function of the distribution of long-term noise indicators by kernel density estimation, the bootstrap method and Bayesian inference has been formulated. These methods do not impose limitations on the form and properties of the analysed statistics. The theoretical basis of the proposed methods is presented in this paper, together with an example of the calculation of the expected value and variance of the long-term noise indicators LDEN and LN. The proposed solutions are illustrated, and their usefulness analysed, on the basis of monitoring results of traffic noise recorded in Cracow, Poland.
Go to article
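A hedged sketch of the bootstrap ingredient mentioned above, with a synthetic series of daily L_DEN values standing in for the Cracow monitoring data:

```python
# Resample a series of daily L_DEN values and evaluate the long-term (energy-averaged)
# indicator on each resample to approximate its distribution; synthetic data only.
import numpy as np

rng = np.random.default_rng(5)
l_den_daily = rng.normal(68.0, 2.5, size=365)                  # synthetic daily L_DEN [dB]

def long_term_level(levels_db: np.ndarray) -> float:
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

boot = np.array([long_term_level(rng.choice(l_den_daily, size=len(l_den_daily), replace=True))
                 for _ in range(5000)])

print(f"long-term L_DEN ≈ {long_term_level(l_den_daily):.2f} dB, "
      f"bootstrap standard uncertainty ≈ {boot.std(ddof=1):.2f} dB")
```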

Abstract

The paper focuses on the problem of robust fault detection using analytical methods and soft computing. Taking into account the model-based approach to Fault Detection and Isolation (FDI), possible applications of analytical models, and first of all of unknown-input observers, are considered. The main objective is to show how to employ the bounded-error approach to determine the uncertainty of soft computing models (neural networks and neuro-fuzzy networks). It is shown that, based on the soft computing model uncertainty defined as a confidence range for the model output, adaptive thresholds can be described. The paper contains a numerical example that illustrates the effectiveness of the proposed approach in increasing the reliability of fault detection, and a comprehensive simulation study regarding the DAMADICS benchmark problem is performed in the final part.
Go to article
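A hedged sketch of the adaptive-threshold idea, with synthetic signals and an arbitrary confidence range (not the DAMADICS benchmark data):

```python
# A fault is signalled when the measured output leaves the band
# [model output - delta, model output + delta] defined by the model's confidence range.
import numpy as np

t = np.arange(500)
y_model = np.sin(0.02 * t)                       # output of the (soft computing) model
delta   = 0.15 + 0.05 * np.abs(np.cos(0.02 * t)) # confidence range -> adaptive threshold

y_meas = y_model + 0.03 * np.random.default_rng(6).standard_normal(t.size)
y_meas[300:] += 0.5                              # injected fault

fault = np.abs(y_meas - y_model) > delta
print("first fault alarm at sample", int(np.argmax(fault)))
```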

Abstract

Under steady-state conditions, when the fluid temperature is constant, temperature measurement can be accomplished with a high degree of accuracy owing to the absence of damping and time lag. However, when the fluid temperature varies rapidly, for example during start-up, appreciable differences occur between the actual and the measured fluid temperature, because it takes time for heat to transfer through the heavy thermometer pocket to the thermocouple. In this paper, a method for determining the transient fluid temperature based on a first-order thermometer model is presented. The fluid temperature is determined using a thermometer which is suddenly immersed into boiling water. Next, the time constant is determined as a function of fluid velocity for four sheathed thermocouples with different diameters. To demonstrate the applicability of the presented method to actual data where the air velocity varies, the temperature of air is estimated based on measurements carried out with three thermocouples of different outer diameters. Lastly, the time constant is presented as a function of the fluid velocity and the outer diameter of the thermocouple.
Go to article
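A sketch of the first-order model underlying such a reconstruction, τ dT_m/dt + T_m = T_f, so that T_f ≈ T_m + τ dT_m/dt; the time constant and signals below are illustrative, not the paper's measurements:

```python
# First-order thermometer model: tau * dTm/dt + Tm = Tf, so the fluid temperature can
# be reconstructed as Tf = Tm + tau * dTm/dt; illustrative values only.
import numpy as np

tau = 8.0                                         # thermometer time constant [s] (assumed)
dt  = 0.5
t   = np.arange(0.0, 120.0, dt)

T_fluid_true = 20.0 + 80.0 * (1.0 - np.exp(-t / 30.0))        # "actual" fluid temperature
T_meas = np.empty_like(t)                                      # simulated thermometer reading
T_meas[0] = 20.0
for i in range(1, len(t)):
    T_meas[i] = T_meas[i - 1] + dt / tau * (T_fluid_true[i - 1] - T_meas[i - 1])

T_fluid_rec = T_meas + tau * np.gradient(T_meas, dt)           # reconstructed fluid temperature
print(f"max reconstruction error ≈ {np.max(np.abs(T_fluid_rec - T_fluid_true)):.2f} K")
```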

Abstract

A novel approach to treating the uncertainty about the real levels of finished products during the production planning and scheduling process is presented in the paper. Interval arithmetic is used to describe the uncertainty concerning production that was planned to cover potentially defective products but that meets the customer's quality requirements and can be delivered as fully valuable product. An interval lot-sizing and scheduling model to solve this problem is proposed; a dedicated version of a genetic algorithm able to deal with interval arithmetic is then used to solve test problems taken from a real-world example described in the literature. The achieved results are compared with a standard approach in which no uncertainty about the real production of valuable castings is considered. It is shown that interval arithmetic can be a valuable method for modelling uncertainty, and that the proposed approach can provide more accurate information to the planners, allowing them to take better-tailored decisions.
Go to article

Abstract

This paper proposes a practical tuning of closed loops with model-based predictive control. The data assumed to be known from the process are the result of the bump test commonly applied in industry, known in engineering as step response data. A simplified context is assumed, such that no prior know-how is required from the plant operator; this assumption is very realistic for first-time users, both industrial operators and students during first hands-on training. A first-order-plus-dead-time model is approximated and the controller parameters follow immediately from heuristic rules. The analysis has been performed in simulation on representative dynamics, with guidelines for the various types of processes. Three single-input single-output experimental setups, with no expert users available and located in different places, both educational and industrial, have been used; these setups are representative of practical cases: a system with dominant variable time delay, a non-minimum-phase system and an open-loop unstable system. Furthermore, in a multivariable control context, a train of separation columns has been tested for control in simulation, followed by experimental tests on a laboratory system with similar dynamics, i.e. a sextuple coupled water tank system. The results indicate that the proposed methodology is suitable for hands-on tuning of predictive control loops, with some limitations on performance and on multivariable process control.
Go to article
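The paper's own heuristic tuning rules are not reproduced here; as a hedged sketch of the identification step they start from, a classical two-point method can extract first-order-plus-dead-time parameters from recorded bump-test data:

```python
# Extract a first-order-plus-dead-time (FOPDT) model from step-response data with the
# classical two-point method: tau = 1.5*(t_632 - t_283), dead time = t_632 - tau.
import numpy as np

def fopdt_from_step(t: np.ndarray, y: np.ndarray, du: float):
    """Estimate gain K, time constant tau and dead time theta from step-response data."""
    y0, yss = y[0], y[-1]
    K = (yss - y0) / du
    t_283 = t[np.argmax(y >= y0 + 0.283 * (yss - y0))]   # time to reach 28.3 % of the change
    t_632 = t[np.argmax(y >= y0 + 0.632 * (yss - y0))]   # time to reach 63.2 % of the change
    tau = 1.5 * (t_632 - t_283)
    theta = max(t_632 - tau, 0.0)
    return K, tau, theta

# synthetic bump test on a true FOPDT process (K = 2, tau = 10, theta = 3), for illustration
t = np.linspace(0.0, 80.0, 801)
y = 2.0 * np.where(t > 3.0, 1.0 - np.exp(-(t - 3.0) / 10.0), 0.0)
print(fopdt_from_step(t, y, du=1.0))
```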
