This research presents a comprehensive assessment of the quality of precision castings made in the Replicast CS process. The evaluation was based on the quality of the surface layer, shape errors and the accuracy of linear dimensions. The studies were carried out on modern equipment, including a Zeiss Calypso coordinate measuring machine and a profilometer. The results obtained allow a comparison of models made by the lost wax process and by the Replicast CS process.
The effectiveness of a weapon stabilization system depends largely on the choice of sensor, i.e. the accelerometer. The paper identifies and examines the fundamental errors of piezoelectric accelerometers and proposes measures for their reduction. The errors of a weapon stabilizer piezoelectric sensor have been calculated; the instrumental measurement error does not exceed 0.1 × 10−5 m/s2. The errors caused by the method of attachment to the base, various noise sources and zero-point drift can be mitigated by the design features of the piezoelectric sensors used in weapon stabilizers.
Time domain analysis is used to determine whether A/D converters employing higher-order sigma-delta modulators, widely used in digital acoustic systems, outperform classical synchronous A/D converters with first-order modulators with respect to an important metrological property: the magnitude of the quantization error. It is shown that the quantization errors of sigma-delta A/D converters with higher-order modulators are at the same level as those of converters with a first-order modulator.
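The quantization-error behaviour can be illustrated with a toy first-order loop. The sketch below is illustrative only: the function name, the constant DC input, and plain averaging in place of a proper decimation filter are all simplifying assumptions, and higher-order loops (the paper's actual subject) are not modelled.

```python
def sigma_delta_estimate(x, osr):
    """One-bit first-order sigma-delta modulation of a constant input x
    (|x| < 1), followed by simple averaging over `osr` samples.
    The magnitude of the residual quantization error is bounded by the
    (bounded) integrator state divided by the oversampling ratio."""
    acc = 0.0       # loop integrator
    total = 0.0     # running sum of the 1-bit output stream
    for _ in range(osr):
        acc += x                            # integrate the input
        bit = 1.0 if acc >= 0.0 else -1.0   # 1-bit quantizer
        acc -= bit                          # feed the bit back
        total += bit
    return total / osr
```

For example, with an input of 0.3 the averaged estimate is within about 2/osr of the true value, so increasing the oversampling ratio shrinks the quantization error of the decimated output.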
DNA sequencing remains one of the most important problems in molecular and computational biology. One of the methods used for this purpose is sequencing by hybridization. In this approach DNA chips composed of a full library of oligonucleotides of a given length are usually used, but in principle it is possible to use other types of chips. Isothermic DNA chips, one such alternative, may reduce the hybridization error rate when used for sequencing. However, it was not clear whether the number of errors resulting from subsequence repetitions is also reduced in this case. In this paper a method for estimating the resolving power of isothermic DNA chips is described, which allows a comparison between such chips and classical ones. The analysis of the resolving power shows that the probability of sequencing errors caused by subsequence repetitions is greater for isothermic chips than for their classical counterparts of similar cardinality. This result suggests that isothermic chips should be chosen carefully, since in some cases they may not give better results than classical ones.
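As a toy illustration of repetition-induced ambiguity, the sketch below uses the simplified classical-chip model (a full l-mer library and error-free hybridization; `spectrum` is an illustrative helper, not the paper's method). Two distinct sequences containing repeated (l-1)-mers can yield identical spectra and are therefore indistinguishable by the chip:

```python
def spectrum(seq, l):
    """Set of all l-mers of `seq` -- the idealized, error-free readout
    of a classical chip with a full l-mer library (isothermic chips use
    probes of varying length instead, which is not modelled here)."""
    return {seq[i:i + l] for i in range(len(seq) - l + 1)}

# Two distinct sequences whose 3-mer spectra coincide because of the
# repeated 2-mers "AT" and "CA" -- exactly the kind of subsequence
# repetition whose error probability the paper estimates:
s1, s2 = "ATGGCATCA", "ATCATGGCA"
```

Here `spectrum(s1, 3) == spectrum(s2, 3)` although `s1 != s2`, so no chip of this idealized kind can decide which sequence was hybridized.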
The paper presents an interpretation of fractional calculus of positive and negative orders for functions based on sampled measured quantities, together with the errors introduced by digital signal processing. The derivative as a function limit and the Grünwald-Letnikov differintegral are presented together in Chapter 1 owing to the similarity of their definitions. A notation of fractional calculus based on the gradient vector of measured quantities, with its geometrical and physical interpretation for positive and negative orders, is presented in Chapters 2 and 3.
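A minimal numerical sketch of the Grünwald-Letnikov differintegral applied to uniformly sampled quantities (an illustrative implementation using the standard recursive coefficient update, not the paper's gradient-vector notation):

```python
def gl_differintegral(samples, alpha, h):
    """Grünwald-Letnikov differintegral of order `alpha` evaluated at
    the last sample, from uniformly spaced samples with step h.
    Positive alpha acts as a derivative, negative alpha as an integral.
    Coefficients (-1)^k * C(alpha, k) are built by the recursion
    c_0 = 1, c_{k+1} = c_k * (k - alpha) / (k + 1)."""
    n = len(samples)
    coeff = 1.0
    total = 0.0
    for k in range(n):
        total += coeff * samples[n - 1 - k]   # newest sample first
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha
```

For alpha = 1 the sum collapses to the backward difference quotient, and for alpha = -1 all coefficients equal 1, giving the rectangle-rule integral, which makes the unified treatment of positive and negative orders easy to check numerically.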
The accuracy of vehicle speed measured by a speedometer is analysed, with emphasis on the application of the skew normal distribution. The accuracy of the measured vehicle speed depends on many error sources: the construction of the speedometer, the measurement method, the inadequacy of the model to the real physical process, the transfer of the information signal, external conditions, production process technology, etc. The speedometer errors are analysed in relation to the errors of speed control gauges whose operation is based on the Doppler effect. Parameters of the normal and skew normal distributions were applied in the error analysis. It is shown that applying maximum permissible errors to control the measurement results of vehicle speed gives paradoxical results: in the case of the skew normal distribution, the standard deviations at higher vehicle speeds are smaller than those at lower speeds, whereas in the case of the normal distribution a higher speed has a greater standard deviation. For speed measurements by Doppler speed gauges it is suggested to calculate the weighted average vehicle speed instead of the arithmetic average, which corresponds better to the real dynamic changes of the vehicle speed parameters.
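The suggested weighted averaging can be sketched as follows. The choice of weights here (e.g. the durations of the individual Doppler measurements) is an illustrative assumption; the paper's exact weighting scheme is not reproduced:

```python
def weighted_average_speed(speeds, weights):
    """Weighted average of Doppler speed readings. The weights could
    be, e.g., the durations of the individual measurements -- an
    illustrative assumption, not the paper's specific scheme."""
    return sum(v * w for v, w in zip(speeds, weights)) / sum(weights)

def arithmetic_average_speed(speeds):
    """Plain arithmetic mean, for comparison."""
    return sum(speeds) / len(speeds)
```

With readings of 50, 60 and 70 km/h where the last reading covers twice the duration of the others, the weighted average is 62.5 km/h while the arithmetic average is 60 km/h, showing how weighting shifts the result toward the longer-lasting speed regime.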
A novel algorithm is presented to deduce individual nodal forwarding behavior from standard end-to-end acknowledgments. The algorithm is based on a well-established mathematical method and is robust to network-related errors and changes in nodal behavior. The proposed solution was verified in network simulations, in which it achieved sound results in a challenging multihop ad-hoc network environment.
Heresy is usually defined as an error concerning the content of faith. In this article heresy is shown as a sin requiring conversion and penance and not just a withdrawal of one's views. A sin of heresy is compared to adultery or idolatry, for which the same penance used to be assigned (e.g. Synod of Elvira in 306, can. 22). In this context the condemnation of Nestorius by the Council of Ephesus in 431 is characteristic because it is focused on the insult to Jesus Christ and not on erroneous conceptions. It is also the case with the formulas of condemnation of heretics, where such invectives as contamination, sacrilegium or perfidia were often used, and those terms belong to the field of morality rather than to intellectual disputes or differences.
An approach to power system state estimation using a particle filter is proposed in the paper. Two problems were taken into account during the research, namely bad measurement data and a network structure modification with rapid changes of the state variables. For each case a modification of the algorithm is proposed. It was also observed that the anti-zero-bias modification has a very positive influence on the obtained results (an improvement of a few orders of magnitude compared with the standard particle filter), while the additional computational cost is negligible. For the second problem, the applied modification also improved the estimation quality of the state variables. The obtained results were compared with the extended Kalman filter method.
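A minimal bootstrap particle filter on a toy scalar model (not the power system model of the paper; all parameters and the random-walk dynamics are illustrative) shows the predict-weight-resample cycle the method is built on:

```python
import math
import random

def particle_filter(observations, n_particles=500,
                    process_std=0.1, meas_std=0.5, seed=1):
    """Bootstrap particle filter for a scalar random-walk state
    observed in Gaussian noise (an illustrative toy model)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: propagate particles through the process model
        particles = [x + rng.gauss(0.0, process_std) for x in particles]
        # update: weight each particle by the measurement likelihood
        weights = [math.exp(-0.5 * ((z - x) / meas_std) ** 2)
                   for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # state estimate: weighted posterior mean
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample (multinomial) to counter weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The paper's modifications (handling bad measurement data, network structure changes and the anti-zero-bias step) would plug into this cycle at the weighting and resampling stages.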
Freeform surfaces have wide engineering applications. Designers use B-splines, Non-Uniform Rational B-Splines, etc., to represent freeform surfaces in CAD, while manufacturers employ machines with controllers based on approximating functions or splines. Various errors also creep in during machining operations. The manufactured freeform surfaces therefore have to be verified for conformance to the design specification. Points on the surface are probed using a coordinate measuring machine, and the substitute surface geometry established from the measured points is compared with the design surface. The sampling points are distributed according to different strategies. In the present work, two new strategies for distributing the points, based on uniform surface area and on dominant points, are proposed, taking into account the geometrical nature of the surfaces. Metrological aspects such as probe contact and the margins to be provided along the sides have also been included. The results are discussed in terms of the deviation between the measured points and the substitute surface as well as between the design and substitute surfaces, and are compared with those obtained with methods reported in the literature.
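A one-dimensional analogue of the uniform-surface-area strategy can be sketched as equal-arc-length sampling of a parametric curve. This is an illustrative simplification: the paper distributes points over surfaces, and the function below is a hypothetical helper, not the authors' algorithm:

```python
import math

def uniform_arclength_samples(curve, n_samples, n_fine=1000):
    """Parameter values that split a parametric curve curve(t) -> (x, y),
    t in [0, 1], into segments of approximately equal arc length --
    a 1D analogue of distributing sampling points by uniform area."""
    ts = [i / n_fine for i in range(n_fine + 1)]
    pts = [curve(t) for t in ts]
    # cumulative chord length along a fine polyline approximation
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    targets = [k * total / (n_samples - 1) for k in range(n_samples)]
    out, j = [], 0
    for s in targets:
        while j < n_fine and cum[j + 1] < s:
            j += 1
        seg = cum[j + 1] - cum[j]
        frac = 0.0 if seg == 0 else (s - cum[j]) / seg
        out.append(ts[j] + frac / n_fine)   # interpolate within segment
    return out
```

On a straight line the samples are simply equispaced in the parameter; on a curve with varying speed the parameter values cluster where the geometry stretches, which is the intent of area-based distribution on surfaces.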
Covert sonar operation can be achieved by using continuous frequency-modulated sounding signals with reduced power and a significantly prolonged repetition time. The application of matched filtering in the sonar receiver provides optimal detection conditions against a background of white noise and reverberation, and a very good resolution of range measurements for stationary targets. The article shows that target motion causes large range measurement errors when linear and hyperbolic frequency modulations are used, and formulas for calculating these errors are given. It is shown that for signals with linear frequency modulation the range resolution and detection conditions deteriorate, whereas the use of hyperbolic frequency modulation largely eliminates these adverse effects.
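The article's own formulas are not reproduced here, but the classical range-Doppler coupling of a linear FM pulse gives an order-of-magnitude picture (an assumed textbook relation with two-way Doppler for active sonar, not the authors' derivation):

```python
def lfm_range_error(v, f0, T, B, c=1500.0):
    """Approximate range error (m) of a linear-FM sonar caused by target
    radial speed v (m/s), via classical range-Doppler coupling:
    the two-way Doppler shift f_d = 2*v*f0/c displaces the matched-filter
    peak by dt = f_d*T/B, i.e. dR = c*dt/2 = v*f0*T/B.
    f0: carrier (Hz), T: pulse length (s), B: sweep bandwidth (Hz),
    c: sound speed in water (m/s)."""
    return v * f0 * T / B
```

For example, a 10 m/s target with a 10 kHz carrier, a 1 s pulse and a 1 kHz sweep yields a roughly 100 m range error, illustrating why long low-power LFM pulses are so sensitive to target motion.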
Fractal analysis is one of the rapidly evolving branches of mathematics and finds application in various analyses, such as the description of pore space. It constitutes a new approach to the natural irregularity and roughness of pores. To be applied properly, it should be accompanied by an error estimation. The article presents and verifies the uncertainties and imperfections connected with image analysis and expands on possible ways of correcting them. One of the key aspects of such research is finding both an appropriate location and an adequate number of photos to take. A coarse-grained sandstone thin section was photographed and the pictures were then combined into one larger image. The distributions of the fractal parameters show their variation and suggest that a properly gathered set of photos should include both highly and less porous regions; their number should be representative of, and adequate to, the sample. The influence of resolution on the fractal dimension and lacunarity values was examined. For SEM limestone images obtained using backscattered electrons, magnifications in the range of 120x to 2000x were used; additionally, a single pore was examined. The acquired results indicate that the values of the fractal dimension are similar over a wide range of magnifications, while the lacunarity changes each time, which is connected with the changing homogeneity of the image. The article also addresses the problem of determining the spatial distribution of the fractal parameters on the basis of binarization. The available methods assume that binarization is carried out before or after the image is divided into rectangles used to create the fractal dimension and lacunarity values for interpolation. Individual binarization of each rectangle, although time consuming, provides results that resemble reality more closely. It is not possible to define a single correct methodology of error elimination; instead, a set of hints is presented that can improve the results of further image analysis of pore space.
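A minimal box-counting sketch (illustrative only: lacunarity and the binarization step are omitted, and a square binary image is assumed) shows how a fractal dimension is estimated from an image:

```python
import math

def box_count(image, box):
    """Number of box-by-box cells containing at least one pore pixel
    (value 1) in a square binary image."""
    n = len(image)
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(image[a][b]
                   for a in range(i, min(i + box, n))
                   for b in range(j, min(j + box, n))):
                count += 1
    return count

def fractal_dimension(image, boxes=(1, 2, 4, 8)):
    """Box-counting estimate of the fractal dimension: least-squares
    slope of log N(box) versus log(1/box)."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(image, b)) for b in boxes]
    n = len(boxes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A completely filled image yields dimension 2, the Euclidean limit; porous images fall below it. The resolution and binarization effects discussed in the article enter through `image` itself, which is why the choice of magnification and binarization strategy matters.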