In this paper we show how formal computer science concepts—such as encoding, algorithms, or computability—can be interpreted philosophically, including ontologically and epistemologically. Such interpretations lead to questions and problems whose working solutions constitute some form of pre-philosophical worldview. In this work we focus on questions inspired by the IT distinction between digitality and analogicity, which originates in the mathematical distinction between discreteness and continuity. These include the following questions: (1) Is the deep structure of physical reality digital or analog? (2) Does the human mind more closely resemble a digital or an analog computational system? (3) Does the answer to the second question give us cognitively fruitful insight into the cognitive limitations of the mind? As a particularly important basis for these questions, we consider the fact that the computational power (i.e., the range of solvable problems) of some types of analog computation is greater than that of digital computation.
Green spaces are an integral element of urban structures. They are not only a place of rest for their users but also positively affect their well-being and health. The effect of these spaces is stronger the more smoothly they combine into larger urban layouts – strings of greenery. The introduction of urban greenery can and should be one of the basic elements of revitalization. Often, however, greenery is designed without the multi-aspect analysis needed to understand the conditions and exploit the existing potential of a given place. The use of computational design together with generally available databases, such as SRTM numerical terrain models, the publicly available OSM map database, and EPW meteorological data, allows space to be designed more comprehensively. These design methods allow the greenery design to be better matched to the specific architectural, urban, and environmental conditions of a given area.
Computational modeling plays an important role in the methodology of contemporary science. The epistemological role of modeling and simulation leads to questions about a possible use of this method in philosophy. Attempts to use mathematical tools to formulate philosophical concepts date back to Spinoza and Newton. Newtonian natural philosophy became an example of the successful use of mathematical thinking to describe the fundamental level of nature. Newton’s approach initiated a new scientific field of research in physics, and at the same time his system became a source of new philosophical considerations about physical reality. According to Michael Heller, some physical theories may be treated as formalizations of philosophical conceptions. Computational modeling may be an extension of this idea; this is what I would like to present in the article. I also consider computational modeling in philosophy as a source of new philosophical metaphors; this idea has been proposed in David J. Bolter’s conception of a defining technology. These considerations lead to the following conclusion: significant changes have been taking place in the methodology of philosophy; the new approach does not make traditional methods obsolete, but rather provides new analytical tools for philosophy and a source of inspiring metaphors.
Disk motors are characterized by the axial direction of the main magnetic flux and the variable length of the magnetic flux path along varying stator/rotor radii. This is why it is generally accepted that reliable electromagnetic calculations for such machines should be carried out using the FEM on 3D models. The 3D approach makes it possible to take into account an entire spectrum of different effects. However, such computational analysis is very time-consuming; this is particularly true for machines with only one magnetic axis. An alternative computational method, based on a 2D FEM model of a cylindrical motor, is proposed in the paper. The obtained calculation results have been verified against lab test results for a physical model. The proposed method leads to a significant decrease in computational time, i.e. it speeds up the iterative search for the most advantageous design.
The problem of performing software tests in a Testing-as-a-Service cloud environment is considered and formulated as online cluster scheduling on parallel machines with the total flowtime criterion. A mathematical model is proposed. Several properties of the problem are discussed, including solution feasibility and the connection to classic scheduling on parallel machines. A family of algorithms based on a new priority rule called the Smallest Remaining Load (SRL) is proposed. We prove that algorithms from this family are not competitive relative to each other. A computer experiment using real-life data indicated that the SRL algorithm with the longest-job sub-strategy performs best. This algorithm is then compared with the Simulated Annealing metaheuristic. The results indicate that the metaheuristic rarely outperforms the SRL algorithm, obtaining worse results most of the time, which is counter-intuitive for a metaheuristic. Finally, we test the accuracy of predicting job processing times. The results indicate high accuracy (91.4%) for predicting the processing times of test cases and even higher accuracy (98.7%) for predicting the remaining load of test suites. The results also show that schedules obtained through prediction are stable (coefficient of variation 0.2‒3.7%) and do not affect most of the algorithms (around 1% difference in flowtime), proving that the considered problem is semi-clairvoyant. For the Largest Remaining Load rule, the predicted values tend to perform better than the actual values. The use of predicted values affects the SRL algorithm the most (up to 15% flowtime increase), but it still outperforms the other algorithms.
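The paper's exact SRL formulation is not reproduced in this abstract, but one plausible reading of the rule can be sketched as an offline simplification: repeatedly pick the test suite with the smallest remaining load and, within it, dispatch its longest pending job (the longest-job sub-strategy) to the earliest-free machine. The suite/job data structure and the bookkeeping below are illustrative assumptions, not the paper's model:

```python
import heapq

def srl_schedule(suites, num_machines):
    """Sketch of a list scheduler using the Smallest Remaining Load rule:
    pick the suite with the smallest remaining load, dispatch its longest
    pending job to the machine that frees up first (illustrative reading
    of the abstract, not the paper's exact online algorithm).
    Returns total flowtime (sum of job completion times)."""
    # remaining load of a suite = sum of processing times of its pending jobs
    pending = {name: sorted(jobs, reverse=True) for name, jobs in suites.items()}
    load = {name: sum(jobs) for name, jobs in pending.items()}
    machines = [0.0] * num_machines          # time at which each machine is free
    heapq.heapify(machines)
    flowtime = 0.0
    while any(pending.values()):
        # SRL rule: among suites with pending jobs, smallest remaining load
        suite = min((s for s in pending if pending[s]), key=lambda s: load[s])
        job = pending[suite].pop(0)          # longest-job sub-strategy
        load[suite] -= job
        start = heapq.heappop(machines)      # earliest-free machine
        finish = start + job
        flowtime += finish
        heapq.heappush(machines, finish)
    return flowtime

# small illustrative instance: three suites on two machines
total = srl_schedule({"A": [3, 1], "B": [2], "C": [5, 4]}, 2)
```

The greedy structure keeps each dispatch decision O(log m) for the machine choice plus a scan over suites, which is what makes priority-rule schedulers attractive against metaheuristics in an online setting.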
Short-circuit analysis is based on the nodal impedance matrix, which is the inverse of the nodal admittance matrix. If the analysis is conducted for sliding faults, then for each fault location four elements of the nodal admittance matrix change, and the inversion of the admittance matrix would have to be repeated many times. For large-scale networks such an approach is time-consuming and unsatisfactory. This paper proves that for each new fault location a new impedance matrix can be found without recalculating the matrix inverse: it can be obtained by a simple extension of the initial nodal impedance matrix, calculated once for the input model of the network. The paper derives formulas suitable for such an extension and presents a flowchart of the computational method. Numerical tests performed on a test power system confirm the validity and usefulness of the proposed method.
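The paper's own extension formulas are not given in this abstract, but the underlying observation — that changing only four elements of the admittance matrix is a rank-2 perturbation, so the new inverse can be obtained cheaply from the old one — can be illustrated with the Sherman–Morrison–Woodbury identity. The matrix sizes, node indices, and numerical values below are arbitrary illustrative assumptions:

```python
import numpy as np

def update_inverse(Z, i, j, dY):
    """Given Z = inv(Y), return inv(Y') where Y' differs from Y only in the
    four elements (i,i), (i,j), (j,i), (j,j), changed by the 2x2 block dY.
    Uses the Sherman-Morrison-Woodbury identity, so no full re-inversion:
    inv(Y + E dY E^T) = Z - Z E dY (I + E^T Z E dY)^{-1} E^T Z."""
    n = Z.shape[0]
    E = np.zeros((n, 2))                    # selector for nodes i and j
    E[i, 0] = 1.0
    E[j, 1] = 1.0
    ZE = Z @ E                              # n x 2
    K = np.eye(2) + E.T @ ZE @ dY           # 2 x 2 "capacitance" matrix
    return Z - ZE @ dY @ np.linalg.solve(K, E.T @ Z)

# sanity check against brute-force re-inversion on a random symmetric Y
rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 6))
Y = Y + Y.T + 10.0 * np.eye(6)              # well-conditioned, symmetric
Z = np.linalg.inv(Y)
dY = np.array([[0.3, -0.1], [-0.1, 0.2]])   # fault changes at nodes 1 and 4
Y2 = Y.copy()
for a, p in enumerate((1, 4)):
    for b, q in enumerate((1, 4)):
        Y2[p, q] += dY[a, b]
Z2 = update_inverse(Z, 1, 4, dY)
assert np.allclose(Z2, np.linalg.inv(Y2))
```

Each fault location then costs only a few matrix-vector products with the initial impedance matrix and one 2×2 solve, instead of a full O(n³) re-inversion, which matches the motivation stated in the abstract.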