The aim of this paper is to present and explain the metaethical theory proposed by Ayn Rand. In particular, Rand’s view of ethics as necessary for human life is discussed. I also analyze the concept of value, which is crucial to Rand’s ethics. I seek to demonstrate that the concept of value is rooted in the concept of life, and that it follows that the normative sphere is secondary to the existence of living beings. Further, I introduce Rand’s argument concerning human life as the ultimate point of reference in her philosophy. Finally, I explain the conditional character of morality and end the paper with a short discussion of Rand’s unique view on the objectivity of values.
This paper studies aspects of wave propagation in a random generalized-thermal micropolar elastic medium. The smooth perturbation technique, appropriate for stochastic differential equations, has been employed. Six different types of waves propagate in the random medium. The dispersion equations have been derived, and the effects of random variations of the micropolar elastic and generalized thermal parameters have been computed. Randomness causes a change of phase speed and attenuation of the waves. Attenuation coefficients for high-frequency waves have been computed. Second-moment properties have been briefly discussed with application to wave propagation in the random micropolar elastic medium. Integrals involving correlation functions have been transformed to radial forms, and a special type of generalized thermo-mechanical auto-correlation function has been used to approximate the effects of random variations of the parameters. The uncoupled problem has also been briefly outlined.
Improving the energy efficiency of heat supply systems requires increasingly complex methods. The basic way to reduce heat consumption is better thermal insulation, although it offers increasingly limited possibilities and requires relatively large financial outlays. Good effects can also be achieved by better adapting the heat source to the conditions of the specific facility supplied with heat. However, this requires research that identifies the effectiveness of such solutions, as well as tools to describe selected elements of the system or the system as a whole. The article presents the results of tests carried out for a gas boiler room supplying heat to a group of residential buildings. The goal was to build a model forecasting the part of the day in which the maximum gas consumption occurs. Having measurements of gas consumption in consecutive hours of the day, it was decided to build a forecasting model determining the part of the day in which such a maximum would occur. The model was created with the random forest procedure using the mlr package (Kassambara), and its hyperparameters were tuned on historical data. Based on data from another period of boiler room operation, the results of the model’s quality assessment are presented: an efficiency close to 44% was achieved, and tuning the model improved its predictive ability.
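As an illustration of the modelling approach, the sketch below reproduces the task in Python with scikit-learn (the paper itself uses R's mlr package). The feature names, synthetic data and tuning grid are assumptions made for the sake of a runnable example, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n_days = 400
# Hypothetical daily features (names are assumptions, not the paper's
# variables): outdoor temperature, day of week, previous-day consumption.
X = np.column_stack([
    rng.normal(5, 8, n_days),        # mean outdoor temperature [degC]
    rng.integers(0, 7, n_days),      # day of week
    rng.normal(1000, 150, n_days),   # previous-day gas consumption [m^3]
])
# Target: the part of the day (0 = night, 1 = morning, 2 = evening) in
# which the maximum gas consumption occurred.
y = rng.integers(0, 3, n_days)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameter tuning on "historical" data, as described in the abstract.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_)
print("hold-out accuracy:", grid.best_estimator_.score(X_te, y_te))
```

On real consumption data the hold-out accuracy, not the synthetic score above, would correspond to the roughly 44% efficiency reported in the abstract.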
Trabecular bone consists of trabeculae whose mechanical properties differ significantly from those of the surrounding marrow; therefore, an ultrasonic wave propagating within the bone structure is strongly scattered. The aim of this paper was to evaluate the contribution of first-, second- and higher-order scattering (multiple scattering) to the total scattering of ultrasound in trabecular bone. The scattering due to the interconnections between thick trabeculae, usually neglected in trabecular bone models, has also been studied. The basic element in our model of the trabecular bone was an elastic cylinder of varying finite length, diameter and orientation; the model thus accounted for variation in both element size and spatial configuration. The field scattered on the bone model was evaluated by numerically solving the integral form of the generalized Sturm-Liouville equation describing a scalar wave in inhomogeneous and lossy media. Effective cross-sections were determined for the numerically calculated scattered fields, and the influence of absorption on the scattering coefficients was demonstrated. The results allow the conclusion that within the frequency range from 0.5 to 1.5 MHz the contribution of second-order scattering to the effective backscattering cross-section is at least 500 times lower than that of first-order scattering. It was noticed that for frequencies above 1.5 MHz the backscattering (reflection) coefficients calculated for second-order scattering grow rapidly.
Five models and a methodology are discussed in this paper for constructing classifiers capable of recognizing, in real time, the type of fuel injected into a diesel engine cylinder with an accuracy acceptable in practical technical applications. Experimental research was carried out on a dynamic engine test facility. The in-cylinder and injection-line pressure signals of an internal combustion engine powered by mineral fuel, biodiesel or blends of these two fuel types were evaluated using the vibro-acoustic method. Computational intelligence methods such as classification trees, particle swarm optimization and random forests were applied.
An original model based on first principles is constructed for the temporal correlation of acoustic waves propagating in random scattering media. The model describes the dynamics of wave fields in a previously unexplored, moderately strong (mesoscopic) scattering regime, intermediate between those of weak scattering, on the one hand, and diffusing waves, on the other. It is shown that by considering the wave vector as a free parameter that can vary at will, one can provide an additional dimension to the data, resulting in a tomographic-type reconstruction of the full space-time dynamics of a complex structure, instead of a plain spectroscopic technique. In Fourier space, the problem is reduced to a spherical mean transform defined for a family of spheres containing the origin, and therefore is easily invertible. The results may be useful in probing the statistical structure of various random media with both spatial and temporal resolution.
This paper presents the results of a theoretical and practical analysis of selected features of the conditional average of the absolute value of a delayed signal (CAAV). The results obtained with the CAAV method are compared with those obtained by the cross-correlation function (CCF) method, which is often used in measurements of random signal time delay. The paper is divided into five sections. The first is a short introduction to the subject of the paper. The model of the measured stochastic signals is described in Section 2. The fundamentals of time delay estimation using CCF and CAAV are presented in Section 3, where the standard deviations of both functions at their extreme points are evaluated and compared. The results of experimental investigations are discussed in Section 4. Computer simulations were used to evaluate the performance of the CAAV and CCF methods; the signal and the noise were Gaussian random variables produced by a pseudorandom noise generator. The experimental standard deviations of both functions for the chosen signal-to-noise ratios (SNR) were obtained and compared, with all simulation results averaged over 1000 independent runs. The experimental results were close to the theoretical values. Conclusions and final remarks are included in Section 5. The authors conclude that the CAAV method described in this paper has a smaller standard deviation at the extreme point than CCF and can be applied to time delay measurement of random signals.
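The idea behind the two estimators can be sketched on synthetic data. The CAAV variant below, which averages |y(t + τ)| at the instants where x(t) exceeds a one-sigma threshold, is a simplified illustration of the conditional-averaging principle, not necessarily the exact estimator of the paper; the delay, sample size and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_delay = 20000, 37
s = rng.normal(size=n + true_delay)
x = s[true_delay:]                      # reference signal x(t)
y = s[:n] + 0.5 * rng.normal(size=n)    # y(t) = x(t - delay) + Gaussian noise

lags = np.arange(100)

# CCF: the delay estimate is the lag maximizing the cross-correlation.
ccf = [np.mean(x[:n - L] * y[L:]) for L in lags]

# Simplified CAAV: average |y(t + tau)| over the instants where x(t)
# exceeds a threshold (one standard deviation); the maximum marks the delay.
idx = np.flatnonzero(x[:n - lags[-1]] > x.std())
caav = [np.mean(np.abs(y[idx + L])) for L in lags]

print("CCF delay estimate: ", lags[int(np.argmax(ccf))])
print("CAAV delay estimate:", lags[int(np.argmax(caav))])
```

Averaging such runs over many noise realizations, as the paper does, would expose the difference in standard deviation between the two extremum estimates.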
Autocorrelation of signals and measurement data makes it difficult to estimate their statistical characteristics. Moreover, the usefulness of the autocorrelation function for the statistical description of signal relations is limited to linear processing models. The use of the conditional expected value opens new possibilities in the description of the interdependence of stochastic signals for both linear and non-linear models. It is described by relatively simple mathematical models, with correspondingly simple algorithms for their practical implementation.
The paper presents a practical model of exponential autocorrelation of measurement data and a theoretical analysis of its impact on the process of conditional averaging of the data. Conditions for optimizing the process were determined so as to decrease the variance of the conditional expected value characteristic. The obtained theoretical relations are compared with selected experimental results.
Affective computing studies and develops systems capable of detecting human affects. The search for universal, well-performing features for speech-based emotion recognition is ongoing. In this paper, a small set of features with support vector machines as the classifier is evaluated on the Surrey Audio-Visual Expressed Emotion database, the Berlin Database of Emotional Speech, the Polish Emotional Speech database and the Serbian emotional speech database. It is shown that a set of 87 features can offer results on par with the state of the art, yielding average emotion recognition rates of 80.21, 88.6, 75.42 and 93.41%, respectively. In addition, an experiment is conducted to explore the significance of gender in emotion recognition using random forests. Two models, trained on the first and second database respectively, and four speakers were used to determine the effects. It is seen that the feature set used in this work performs well for both male and female speakers, yielding approximately 27% average emotion recognition in both models. In addition, the emotions of female speakers were recognized 18% of the time in the first model and 29% in the second. A similar effect is seen with male speakers: the first model yields a 36%, the second a 28% average emotion recognition rate. This illustrates the relationship between the constitution of the training data and emotion recognition accuracy.
This paper presents the results of computer simulations carried out to determine coordination numbers for a system of parallel cylindrical fibres distributed at random in a circular matrix according to a two-dimensional pattern created by the random sequential addition scheme. Two different methods of calculating the coordination number were utilized and compared. The first method was based on integration of the pair distribution function; the second was a modified sequential analysis. The calculations, following the ensemble average approach, revealed that the two methods give very close results for the same neighbourhood area, irrespective of the wide range of radii used for the calculation.
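A minimal sketch of the random sequential addition scheme, followed by a direct neighbour count (a cruder stand-in for the two methods compared in the paper), might look as follows; the disc radius, square box geometry and neighbourhood radius are illustrative assumptions, not the paper's circular-matrix setup.

```python
import numpy as np

rng = np.random.default_rng(2)
radius, n_fibres = 0.02, 200
centres = []
while len(centres) < n_fibres:
    p = rng.uniform(radius, 1 - radius, size=2)
    # RSA rule: accept a trial fibre cross-section only if it overlaps
    # none of the previously accepted ones.
    if all(np.hypot(*(p - q)) >= 2 * radius for q in centres):
        centres.append(p)
centres = np.asarray(centres)

# Coordination number: mean count of centres within r_neigh of each centre.
r_neigh = 6 * radius
d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
coord = ((d > 0) & (d < r_neigh)).sum(axis=1).mean()
print(f"mean coordination number within r = {r_neigh:.2f}: {coord:.2f}")
```

An ensemble average over many such configurations, as in the paper, would reduce the scatter of this estimate.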
In this study, emulsified kerosene was investigated as a means of improving the flotation performance of ultrafine coal. For this purpose, the NP-10 surfactant was used to form the emulsified kerosene. Results showed that the emulsified kerosene increased the recovery of ultrafine coal compared to plain kerosene. This study also revealed the effect of the independent variables (emulsified collector dosage (ECD), frother dosage (FD) and impeller speed (IS)) on the responses (concentrate yield (γC, %), concentrate ash content (%) and combustible matter recovery (ε, %)) based on a Random Forest (RF) model and a Genetic Algorithm (GA). The proposed models for the three responses showed satisfactory results in terms of R². The optimal values of the three test variables were computed by the GA as ECD = 330.39 g/t, FD = 75.50 g/t and IS = 1644 rpm. The responses at these optimal experimental conditions were γC = 58.51%, ash content = 21.7% and ε = 82.83%. The results indicated that the GA was a useful method for obtaining the best values of the operating parameters. According to the results obtained under optimal flotation conditions, kerosene consumption was reduced by about 20% by using the emulsified kerosene.
The constrained averaged controllability of the linear one-dimensional heat equation defined on R and R+ is studied. The control is carried out by means of the time-dependent intensity of a heat source located at an uncertain interval of the corresponding domain, the end-points of which are considered as uniformly distributed random variables. Employing the Green's function approach, it is shown that the heat equation is constrained averaged controllable neither in R nor in R+. Sufficient conditions on the initial and terminal data for averaged exact and approximate controllability are obtained. Constrained averaged controllability of the heat equation is, however, established in the case of a point heat source whose location is considered as a uniformly distributed random variable. Moreover, it is shown that the lack of averaged controllability occurs for random variables with an arbitrary symmetric density function.
The aim of the study was to evaluate the possibility of applying different data mining methods to model the inflow of sewage into the municipal sewage treatment plant. Prediction models were elaborated using support vector machines (SVM), random forests (RF), k-nearest neighbours (k-NN) and kernel regression (K). The data consisted of time series of daily rainfall, water level measurements in the clarified sewage recipient, and the wastewater inflow into the Rzeszow city plant. For the models with two input variables and one explanatory variable, the smallest errors were obtained when the model inputs were sewage inflow and rainfall data delayed by 1 day; the best fit was provided by the RF method and the worst by the K method. In the case of models with three inputs and two explanatory variables, the best results were reported for the SVM and the worst for the K method. In most of the modelling runs the smallest prediction errors were obtained with the SVM method and the largest with the K method; for the simplest model, with one input delayed by 1 day, the k-NN method gave the best results, and for the models with two inputs the RF method was the best in two modelling runs.
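A runnable sketch of such a comparison on synthetic data is given below. The data-generating process, scikit-learn models and hyperparameters are assumptions (kernel regression, in particular, is approximated here by kernel ridge regression), not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 500
rain = rng.gamma(1.5, 2.0, n)               # synthetic daily rainfall [mm]
inflow = np.empty(n)
inflow[0] = 100.0
for t in range(1, n):                       # inflow depends on its own past
    inflow[t] = 0.7 * inflow[t - 1] + 5.0 * rain[t - 1] + rng.normal(0, 3)

# Model inputs delayed by one day: inflow(t-1), rainfall(t-1) -> inflow(t).
X = np.column_stack([inflow[:-1], rain[:-1]])
y = inflow[1:]
split = int(0.75 * len(y))

models = {
    "SVM": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "RF": RandomForestRegressor(random_state=0),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "K": make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=1.0)),
}
errors = {}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    errors[name] = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name:5s} MAE = {errors[name]:.2f}")
```

On the real Rzeszow data the ranking of the four methods is the one reported in the abstract; on this synthetic series the ranking may differ.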
Optimal random network coding reduces the complexity of computing the coding coefficients and the encoded packets. The coefficients are chosen so that minimal transmission bandwidth is enough to transmit them to the destinations, the decoding process can start as soon as encoded packets begin to arrive at the destination, and decoding has lower computational complexity. In traditional random network coding, by contrast, decoding is possible only after all encoded packets have been received. Optimal random network coding also reduces the cost of computation. In this research work, the size of the coding coefficient matrix is determined by the size of the layers, which defines the number of symbols or packets involved in the coding process. The matrix elements are defined so that coding and decoding require a minimal number of additions and multiplications; introducing systematic sparseness in the coding coefficients yields a lower-triangular coefficient matrix, which reduces computational complexity and also makes partial decoding possible. For optimal utilization of computational resources, a windowing size tuned to the budget of unoccupied resources, such as available memory, is used to define the size of the coefficient matrix.
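The benefit of a lower-triangular, sparse coefficient matrix for partial decoding can be illustrated with a toy example over the reals (a real scheme would operate over a finite field such as GF(2^8)); the layer size and sparsity level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5                                    # packets per layer/generation
packets = rng.integers(0, 256, size=(k, 8)).astype(float)

# Sparse lower-triangular coefficient matrix with a nonzero diagonal.
C = np.tril(rng.integers(1, 10, size=(k, k)).astype(float))
C[np.tril_indices(k, -1)] *= rng.random(k * (k - 1) // 2) < 0.4  # sparsify

encoded = C @ packets                    # encoded packets, sent in order

# Progressive decoding by forward substitution: after receiving the first
# i encoded packets, the first i source packets are already recoverable,
# so decoding need not wait for the whole generation.
decoded = np.zeros_like(packets)
for i in range(k):
    decoded[i] = (encoded[i] - C[i, :i] @ decoded[:i]) / C[i, i]
    # at this point decoded[0..i] are final -- partial decoding

print(np.allclose(decoded, packets))     # -> True
```

The sparsity of the strictly-lower part is what cuts the number of additions and multiplications per decoded packet.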
In this article we present a new hash function based on an irregularly decimated chaotic map. The hash algorithm, called SHAH, is based on two Tinkerbell maps filtered with an irregular decimation rule. We evaluated the novel function using distribution analysis, sensitivity analysis, static analysis of diffusion, static analysis of confusion, and collision analysis. The experimental data show that SHAH achieves a satisfactory level of computer security.
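The two named ingredients, the Tinkerbell map and an irregular decimation rule, can be sketched as follows. This toy construction is emphatically not the SHAH algorithm: the seeding, state folding and bit-extraction steps are invented purely for illustration.

```python
import math

def tinkerbell(x, y, a=0.9, b=-0.6013, c=2.0, d=0.50):
    """One iteration of the Tinkerbell chaotic map."""
    return x * x - y * y + a * x + b * y, 2 * x * y + c * x + d * y

def toy_hash(message: bytes, digest_len: int = 16) -> bytes:
    # Toy seeding: derive two nearby initial states from the message.
    s = sum((i + 1) * m for i, m in enumerate(message, 1)) or 1
    x1, y1 = -0.72 + (s % 1009) / 1e7, -0.64
    x2, y2 = -0.72, -0.64 + (s % 997) / 1e7
    bits = []
    for _ in range(4000):
        x1, y1 = tinkerbell(x1, y1)
        x2, y2 = tinkerbell(x2, y2)
        # Fold states into [-1, 1] to keep the toy iteration bounded
        # (this deliberately alters the pure map).
        x1, y1 = math.remainder(x1, 2.0), math.remainder(y1, 2.0)
        x2, y2 = math.remainder(x2, 2.0), math.remainder(y2, 2.0)
        # Irregular decimation: the first map gates whether the second
        # map's output bit is kept (shrinking-generator style).
        if int(abs(x1) * 1e6) & 1:
            bits.append(int(abs(x2) * 1e6) & 1)
    bits = (bits + [0] * (8 * digest_len))[: 8 * digest_len]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, 8 * digest_len, 8)
    )

h1 = toy_hash(b"hello")
h2 = toy_hash(b"hellp")   # one-character change in the message
print(h1.hex())
```

Because the map is chaotic, nearby seeds diverge quickly, which is the sensitivity property the paper's analyses quantify.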
The correlation of data contained in a series of signal samples makes the estimation of the statistical characteristics describing such a random sample difficult. Positive correlation of the data increases the variance of the arithmetic mean in relation to a series of uncorrelated results. If the normalized autocorrelation function of the positively correlated observations and their variance are known, the effect of the correlation can be taken into account computationally in the estimation process. A significant hindrance to assessing the estimation process appears when the autocorrelation function is unknown. This study describes an application of conditional averaging of positively correlated data with a Gaussian distribution to assess the correlation of an observation series and to determine the standard uncertainty of the arithmetic mean. The method presented here can be particularly useful for high values of correlation (when the value of the normalized autocorrelation function is higher than 0.5) and for series of more than 50 data points. The paper presents the results of theoretical research as well as selected experiments in the processing and analysis of physical signals.
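The effect that motivates the correction can be checked numerically: for AR(1) Gaussian data, the naive σ²/n badly understates the variance of the arithmetic mean, while the standard correction based on the known normalized autocorrelation function recovers it. The parameters below (ρ = 0.8, n = 100) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
rho, n, sigma, runs = 0.8, 100, 1.0, 4000

means = []
for _ in range(runs):
    e = rng.normal(0, sigma * np.sqrt(1 - rho**2), n)
    x = np.empty(n)
    x[0] = rng.normal(0, sigma)
    for t in range(1, n):                 # AR(1): x_t = rho*x_{t-1} + e_t
        x[t] = rho * x[t - 1] + e[t]
    means.append(x.mean())

empirical = np.var(means)
naive = sigma**2 / n                      # valid only for uncorrelated data
k = np.arange(1, n)
# Standard correction for a known normalized autocorrelation rho^k.
corrected = sigma**2 / n * (1 + 2 * np.sum((1 - k / n) * rho**k))

print(f"empirical var of mean: {empirical:.4f}")
print(f"naive sigma^2/n:       {naive:.4f}")
print(f"corrected formula:     {corrected:.4f}")
```

When the autocorrelation function is unknown, as in the case the paper addresses, the correction factor itself must be estimated, which is where conditional averaging comes in.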
The paper analyses the distorted data of an electronic nose in recognizing gasoline bio-based additives. Different data mining tools, such as data clustering, principal component analysis, wavelet transformation, support vector machines and random forests of decision trees, are applied. Special stress is put on the robustness of signal processing systems to the noise distorting the registered sensor signals. A denoising procedure based on the discrete wavelet transformation has been proposed; it reduces the recognition error rate in a significant way. The numerical results of experiments devoted to the recognition of different blends of gasoline have shown the superiority of the support vector machine in a noisy measurement environment.
In this paper the basic methodology of coupled response-degradation modelling of stochastic dynamical systems is presented, along with the effective analysis of selected problems. First, the general formulation of the problems of stochastic dynamics coupled with the evolution of a deterioration process is given. Then some specific degrading oscillatory systems under random excitation are analyzed, with special attention to systems with fatigue-induced stiffness degradation. Both the general discussion and the analysis of selected exemplary problems indicate how the reliability of deteriorating stochastic dynamical systems can be assessed.
The aim of the paper is to present a procedure for generating service loading for fatigue tests of materials and structures. The generated loading is characterized by the desired probability distribution and autocorrelation functions. The proposed numerical procedure uses MATLAB toolboxes and consists of three steps: (a) generation of a sequence of real numbers with the desired autocorrelation function and an arbitrary probability distribution; (b) generation of a loading history with the desired probability distribution function; (c) rearrangement of the loading history from step (b) based on the ordering of the sequence of real numbers with the desired correlation from step (a).
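The three steps can be sketched in Python (the paper uses MATLAB toolboxes); the AR(1) reference sequence and Weibull target distribution are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho = 5000, 0.9

# (a) correlated reference sequence: AR(1) with autocorrelation rho^|k|
ref = np.empty(n)
ref[0] = rng.normal()
for t in range(1, n):
    ref[t] = rho * ref[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

# (b) loading history with the desired (here Weibull) distribution
loading = rng.weibull(2.0, n)

# (c) rearrangement: sort the loading values, then place them in the rank
# order of the correlated sequence -- the marginal distribution is kept
# exactly while the autocorrelation approximates that of the reference.
order = np.argsort(np.argsort(ref))      # ranks of the reference sequence
history = np.sort(loading)[order]

r1 = np.corrcoef(history[:-1], history[1:])[0, 1]
print(f"lag-1 autocorrelation of generated loading: {r1:.3f}")
```

The rank-reordering step preserves the target distribution exactly, at the cost of only approximately matching the target autocorrelation (the Pearson correlation of the reordered series is slightly below that of the Gaussian reference).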
The paper investigates a Bayesian approach to the estimation of generalized true random-effects models (GTRE). The analysis shows that under suitably defined priors for the transient and persistent inefficiency terms the posterior characteristics of such models are well approximated using simple Gibbs sampling; no model re-parameterization is required. The proposed modification not only allows us to make more reasonable (less informative) assumptions about the prior transient and persistent inefficiency distributions, but also appears to be more reliable in handling especially noisy datasets. The empirical application furthers research into stochastic frontier analysis using GTRE models by examining the relationship between the inefficiency terms in the GTRE, true random-effects, generalized stochastic frontier and standard stochastic frontier models.