Applications in geodesy and engineering surveying require the determination of the heights of vertical control points in national and local networks using different techniques. These techniques can be classified as geometric, trigonometric, barometric and Global Positioning System (GPS) levelling. The aim of this study is to analyse the height differences obtained from three of these techniques: geometric levelling with a precise digital level and a digital level, trigonometric levelling with a total station, and GPS levelling with receivers collecting phase and code observations. The accuracies of these methods are analysed and compared. The results show that precise digital levelling is more stable and reliable than the other two methods, although the results of the three levelling methods agree with each other within a few millimetres. Geometric levelling is usually accepted as being more accurate than the other methods. The discrepancy between geometric levelling and short-range trigonometric levelling is at the level of 8 millimetres. The accuracy of short-range trigonometric levelling is due to the reciprocal and simultaneous observation of zenith angles and slope distances over relatively short distances of 250 m. The difference between the ellipsoidal height differences obtained from GPS levelling without a geoid model and the orthometric height differences obtained from precise geometric levelling is 4 millimetres. A geoid model obtained from a fifth-order polynomial fit over the project area proved sufficient for this study: the discrepancy between precise geometric levelling and GPS levelling with geoid corrections is 4 millimetres over 5 km.
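For reference, the textbook relations underlying the two non-geometric techniques can be sketched as follows (a standard formulation, not quoted from the study; s denotes the slope distance, z the zenith angle, h the ellipsoidal height, H the orthometric height and N the geoid undulation):

```latex
% Reciprocal-simultaneous trigonometric levelling between points A and B:
% averaging the two one-way results largely cancels the refraction and
% Earth-curvature terms.
\Delta H_{AB} \approx \tfrac{1}{2}\left( s_{AB}\cos z_{AB} - s_{BA}\cos z_{BA} \right)

% GPS levelling: orthometric heights follow from ellipsoidal heights once
% a geoid model is available (here, a fifth-order polynomial fit of N).
H = h - N \quad\Rightarrow\quad \Delta H_{AB} = \Delta h_{AB} - \Delta N_{AB}
```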
This research presents a comprehensive assessment of the quality of precision castings made in the Replicast CS process. The evaluation was based on the quality of the surface layer, shape errors and the accuracy of linear dimensions. The studies were carried out on modern equipment; among other instruments, a Zeiss Calypso measuring machine and a profilometer were used. The results obtained allowed a comparison of lost-wax process models and the Replicast CS process.
Modern industry strives to increase the quality of manufactured products while simultaneously decreasing production time and cost. The hybrid system discussed here addresses this demand by combining the high accuracy of a contact CMM with the high measurement speed of non-contact structured-light optical techniques. The article describes the elements of the developed system together with the steps of its measurement process, with emphasis on the segmentation algorithms. Additionally, the accuracy determination of such a system, realized with the help of a specially designed ball-plate measurement standard, is presented.
The paper concerns the accuracy of determining the particle size distributions of fine-grained materials by means of the laser diffraction method. The selection of a measuring method for determining the granulation of materials depends on various properties of the sample, but mainly on the range of particle sizes in the sample. It must be taken into consideration that each measurement method inherently generates different information about the particle size distribution. The measurement method applied has the main impact on the results of the research because it exploits different material properties, such as geometric properties, density or the type of surface (porosity).
The influence of density and particle shape on the results of measurements by laser diffraction was studied in the paper. This method has become a standard for measuring the particle size of mineral powders. The analysis of the particle size distribution of raw materials was performed using an Analysette 22 laser particle sizer. The investigations included measurements of the particle size of raw materials characterized by various densities (coal, porphyry, barite) and particle shapes (copper shale ore, fly ash from coal combustion). The density of the raw materials was determined with a helium pycnometer, while the particle shape was expressed by a coefficient calculated on the basis of the geometric parameters of the particles. The geometry of the grains was measured using an optical microscope with digital image recording and image analysis methods. The accuracy of the laser granulometric analyses was expressed by the variation coefficient of the contents of narrow particle fractions. The results confirmed that laser granulometric analysis provides accurate information about the finest particle size distributions. No significant effect of material density on the accuracy of granulometric analysis was observed. The effect of the particle shape of the tested materials was that the variation coefficient was more stable for particles of more spherical shape, which is related to the applied method of laser measurement. The accuracy of the laser granulometric analyses varies depending on the measured particle size range. The most accurately analysed materials are those that fall within narrow particle fractions.
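The accuracy statistic used here, the variation coefficient of the contents of narrow particle fractions, can be illustrated by a minimal sketch; the repeated fraction contents below are made-up numbers, not the study's data:

```python
import numpy as np

# Hypothetical repeated determinations (in %) of the content of one
# narrow particle fraction from several runs of the same sample on a
# laser particle sizer; the values are illustrative only.
fraction_content = np.array([12.1, 12.4, 11.9, 12.2, 12.0,
                             12.3, 11.8, 12.2, 12.1, 12.0])

mean = fraction_content.mean()
std = fraction_content.std(ddof=1)            # sample standard deviation
variation_coefficient = 100.0 * std / mean    # relative precision, in %

print(f"mean content = {mean:.2f} %, variation coefficient = "
      f"{variation_coefficient:.2f} %")
```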
Safe mine operations and optimal economic decision making in the context of lignite resources require an adequate level of knowledge about the spatial distribution of critical attributes of the deposit in terms of geometry and quality. Therefore, ore body models are generated using different geostatistical approaches, depending on the problem to be solved. This article presents an analysis of geostatistical methods used for deposit modelling. Based on exploration data concerning the caloric value Q, models of an exemplary lignite deposit were built. Two models of the deposit were prepared using two different methods: ordinary kriging (OK) and sequential Gaussian conditional simulation (SGSIM). The different models of the same deposit were analysed and compared with the source data using the criterion of fidelity to statistical attributes such as the mean value, variance and statistical distribution. The models created from the exploration data were also compared with in-situ data gained from survey activities during the exploitation process. As a result of this comparison, the correlation coefficient and measures of deviation (average relative error, absolute relative error) were computed. The models were compared with the in-situ data considering both statistical features and local variability. In conclusion, the study gives valuable insight into the benefits of using certain geostatistical approaches for various tasks and problems in the lignite deposit design process. For the assessment of the average values of deposit parameters, ordinary kriging provides appropriate results. Geostatistical simulation (e.g. sequential Gaussian simulation, SGSIM) provides much more relevant information than ordinary kriging for tasks connected with the probability (or risk) of exceeding defined thresholds. Models made with the simulation method are characterized by high fidelity of the spatial distribution in comparison to the source data.
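As a rough illustration of the ordinary-kriging estimator named above, the following sketch solves the OK system for a single target point; the spherical variogram, its parameters and the synthetic "borehole" data for the caloric value Q are assumptions for demonstration, not the study's model:

```python
import numpy as np

def spherical_variogram(h, nugget=0.0, sill=1.0, rng=500.0):
    """Spherical variogram model (parameters chosen for illustration)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ordinary_kriging(coords, values, target):
    """Ordinary-kriging estimate at one target point.

    coords: (n, 2) sample locations, values: (n,) attribute (e.g. Q),
    target: (2,) location to estimate; returns (estimate, OK variance).
    """
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[n, :n] = A[:n, n] = 1.0        # Lagrange row/column (unbiasedness)
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(coords - target, axis=1))
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    return weights @ values, weights @ b[:n] + mu

# Synthetic exploration data: 20 "boreholes" with caloric values Q (MJ/kg).
gen = np.random.default_rng(0)
coords = gen.uniform(0.0, 1000.0, size=(20, 2))
q = 8.5 + 0.001 * coords[:, 0] + gen.normal(0.0, 0.1, size=20)
estimate, variance = ordinary_kriging(coords, q, np.array([500.0, 500.0]))
print(f"Q estimate = {estimate:.2f} MJ/kg, kriging variance = {variance:.3f}")
```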
Water is the main source of daily life for everyone, everywhere in the world. Sufficient water distribution depends on the location and design of water tanks in a given area. Water storage tanks are relatively flexible structures, and they can tolerate greater settlements than other engineering structures. Nevertheless, deformation of a tank may cause severe damage or even loss of life and injury to people, so monitoring the structural deformation and dynamic response of a water tank and its supporting system to the large variety of external loadings is of great importance for maintaining tank safety and for the economical design of man-made structures. This paper presents an accurate geodetic observation technique for investigating the inclination of an elevated circular water tank and the deformation of its supporting structural system (supporting columns and circular horizontal beams) using a reflectorless total station. The studied water tank was designed to deliver water to around 55,000 people and has a storage capacity of about 750 m³. Owing to the tank's age, non-uniform settlement of its foundation and the movement of pumps and electric machines under the tank body induce stress and strain in the tank membrane and settlement of sediments, so the tank can experience vertical movement, horizontal movement, or both. Three epochs of observations were carried out (July 2014, September 2014 and December 2014). The results of the practical measurements, calculations and analysis of the deformation of the studied elevated tank and its supporting system, obtained using least-squares theory and computer programs, are presented. Monitoring of the water storage tank and its circular reinforced concrete beams and columns over the three epochs showed that the tank body is inclined towards the east and that the value of the inclination increases with time.
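A hedged sketch of how least-squares theory could yield a tank inclination from epoch-wise settlement observations; the monitored-point coordinates and settlements below are hypothetical, and the paper's actual adjustment model may differ:

```python
import numpy as np

# Hypothetical planimetric positions (m) of monitored points on the tank
# support structure and their measured settlements (mm) between two
# observation epochs; these numbers are illustrative, not the paper's.
xy = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 6.0], [0.0, 6.0], [3.0, 3.0]])
dz = np.array([-1.2, -3.4, -3.1, -0.9, -2.1])     # settlement per point, mm

# Least-squares fit of a tilt plane dz = a*x + b*y + c.
A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(dz))])
(a, b, c), *_ = np.linalg.lstsq(A, dz, rcond=None)

tilt = np.hypot(a, b)                              # mm of settlement per m
# Azimuth (from +y = north towards +x = east) of maximum settlement,
# i.e. the direction in which the fitted plane drops fastest.
azimuth = np.degrees(np.arctan2(-a, -b)) % 360.0
print(f"tilt = {tilt:.2f} mm/m towards azimuth {azimuth:.1f} deg")
```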
Early recognition of altered lactate levels is considered a useful prognostic indicator in disease detection for both human beings and animals. It is therefore reasonable to hypothesize that a portable, point-of-care (POC) spectrophotometric device for the analysis of lactate levels may have an application for field veterinarians across a range of conditions and diagnostic procedures. In this study, a total of 72 cattle in the transition period underwent POC spectrophotometric lactate measurement with a portable device (the Vet Photometer) in the field, with a small portion of blood used for comparative ELISA evaluation. The lactate measurements were compared using Passing-Bablok regression analysis and Bland-Altman plots. The Vet Photometer lactate measurement results were in agreement with those generated by the ELISA method: values for the agreement were derived, lying between -1.3 and 0.99 (95% CI), with a positive correlation (r = 0.71) between the two measurements. The equation y = 0.68x + 0.60 was obtained from the Passing-Bablok regression analysis. There were no statistically significant differences in mean values between the measurement methods. In conclusion, the novel veterinary POC spectrophotometric device, the Vet Photometer, is an accurate device for the evaluation of lactate levels in healthy transition cows.
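The two comparison tools named above can be sketched in a few lines; the paired lactate values are synthetic, and the Passing-Bablok step is simplified to the median of pairwise slopes (the full procedure also offsets the median to correct for slopes at or below -1 and derives confidence intervals):

```python
import numpy as np

# Synthetic paired lactate measurements (mmol/L): x = POC device, y = ELISA.
x = np.array([1.1, 2.3, 0.9, 3.4, 1.8, 2.7, 1.5, 4.0])
y = np.array([1.3, 2.1, 1.2, 3.0, 1.9, 2.4, 1.7, 3.4])

# Bland-Altman statistics: bias and 95% limits of agreement.
diff = x - y
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - half_width:.2f}, {bias + half_width:.2f}]")

# Simplified Passing-Bablok-style fit: median of all pairwise slopes,
# then a median-based intercept.
i, j = np.triu_indices(len(x), k=1)
dx, dy = x[j] - x[i], y[j] - y[i]
slopes = dy[dx != 0] / dx[dx != 0]
slope = np.median(slopes)
intercept = np.median(y - slope * x)
print(f"fit: y = {slope:.2f} x + {intercept:.2f}")
```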
There is general agreement that remembering depends not only on memory processes as such, but that encoding, storage and retrieval are under the constant influence of overarching metacognitive processes. Moreover, many interventions designed to improve memory in fact target metacognition. Most attempts to integrate the very different theoretical and experimental approaches in this domain focus on encoding, whereas there is relatively little integration of approaches that focus on retrieval. Therefore, we reviewed studies that used new ideas to improve memory retrieval through a "metacognitive intervention". We concluded that whereas single experimental manipulations were not likely to increase metacognitive ability, more extensive interventions were. We proposed a possible theoretical perspective, namely the Source Monitoring Framework, as a means to integrate the two so far separate ways of thinking about the role of metacognition in retrieval: the model of strategic regulation of memory, and research on appraisals in autobiographical memory. We identified avenues for future research which could address, among other issues, the integration of these perspectives.
This paper provides analyses of the accuracy and convergence time of the PPP method using the GPS system and different IGS products. The official IGS products (Final, Rapid and Ultra-Rapid) as well as MGEX products calculated by the CODE analysis centre were used. In addition, calculations with an elevation-angle-dependent weighting function for the observations were carried out. The best results were obtained for the CODE products, with a precise ephemeris at a 5-minute interval and precise satellite clock corrections at a 30-second interval. For these calculations the accuracy of the position determination was at the level of 3 cm with a convergence time of 44 min. The Final and Rapid products, which comprise orbits with a 15-minute interval and clocks with a 5-minute interval, gave very similar results. The same level of accuracy was obtained for the calculations with the CODE products for which both the precise ephemeris and the precise satellite clock corrections have a 5-minute interval. For these calculations the accuracy was 4 cm with a convergence time of 70 min. The worst accuracy was obtained for the calculations with the Ultra-Rapid products, with an interval of 15 minutes: the accuracy was 10 cm with a convergence time of 120 min. The use of the weighting function improved the accuracy of the position determination in every case except the calculations with the Ultra-Rapid products. The use of this function slightly increased the convergence time, except for the CODE calculation, for which it was reduced to 9 min.
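The paper does not spell out its weighting function, but a common choice in GNSS processing scales the a priori observation sigma by 1/sin(elevation); a minimal sketch, assuming that model:

```python
import numpy as np

def observation_weight(elev_deg, sigma0=0.003):
    """Elevation-dependent weighting often used in GNSS processing:
    sigma(e) = sigma0 / sin(e), hence weight ~ sin^2(e). The exact
    function and sigma0 (here 3 mm) are assumptions, not the paper's."""
    e = np.radians(elev_deg)
    sigma = sigma0 / np.sin(e)      # a priori standard deviation (m)
    return 1.0 / sigma ** 2         # weight for the least-squares solution

# Low-elevation observations are strongly down-weighted:
for el in (10, 30, 60, 90):
    rel = observation_weight(el) / observation_weight(90)
    print(f"elevation {el:2d} deg -> relative weight {rel:.2f}")
```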
Stealth is a frequent requirement in military applications and involves the use of devices whose signals are difficult for the enemy to intercept or identify. The silent sonar concept was studied and developed at the Department of Marine Electronic Systems of the Gdansk University of Technology. The work included a detailed theoretical analysis, computer simulations and some experimental research. The results of the theoretical analysis and the computer simulations suggested that target detection and positioning accuracy deteriorate as the speed of the target increases, as a consequence of the Doppler effect. Consequently, further research and measurements had to be conducted to verify the initial findings. To ensure that the results could be compared with those from the experimental silent sonar model, the target's actual position and speed had to be precisely controlled. The article presents the measurement results of a silent sonar model, looking at its detection, range resolution and the problem of incorrect positioning of moving targets as a consequence of the Doppler effect. The results are compared with those from the theoretical studies and computer simulations.
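The positioning problem mentioned above stems from the textbook two-way Doppler scaling of an active-sonar echo; the relation below is a standard one, not quoted from the article:

```latex
% Echo frequency from a target approaching at radial speed v, with
% sound speed c (about 1500 m/s in sea water):
f_e = f_0 \, \frac{c + v}{c - v} \approx f_0 \left( 1 + \frac{2v}{c} \right)
% For v = 5 m/s the relative shift is about 2*5/1500 = 0.67%; left
% uncompensated in the long correlation processing of a continuous
% low-probability-of-intercept waveform, such a shift degrades detection
% and biases the estimated target position.
```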
According to metrological guidelines and specific legal requirements, every smart electronic electricity meter has to be verified repeatedly at pre-defined regular time intervals. The problem is that in most cases these pre-defined intervals are based on previous experience or empirical knowledge and rarely on scientifically sound data. Since the verification itself is a costly procedure, it would be advantageous to put more effort into defining the required verification periods. A fixed verification interval, recommended by various internal documents, standardised evaluation procedures and national legislation, could then be technically and scientifically better justified and consequently more appropriate and trustworthy for the end user. This paper describes an experiment to determine the effect of alternating temperature and humidity and of a constant high current on a smart electronic electricity meter's measurement accuracy. Based on an analysis of these effects, it is proposed that the current fixed verification interval be revised, also taking different climatic influences into account. The findings of this work could inform a new standardized procedure with respect to a meter's verification interval.