The process of designing and creating an integrated distributed information system for storing the digitized works of scientists at the research institutes of the Almaty academic city is analyzed. The requirements for storing digital objects are defined, and a comparative analysis of the open source software used for this purpose is carried out. The system fully provides the computing resources needed for ongoing research and educational processes, simplifies its further development, and makes it possible to build an advanced IT infrastructure for managing intellectual capital: an electronic library intended to store all books and scientific works of the Kazakhstan Engineering Technological University and the research institutes of the Almaty academic city.
This paper deals with a methodology for the implementation of a cloud manufacturing (CM) architecture. CM is a current paradigm in which dynamically scalable and virtualized resources are provided to users as services over the Internet. CM is based on the concept of cloud computing, which is essential in the Industry 4.0 trend. A CM architecture is employed to map users and providers of manufacturing resources. It reduces costs and development time during a product lifecycle. Because providers describe their services in different ways, we propose taking advantage of semantic web technologies such as ontologies to tackle this issue. Indeed, robust tools are proposed for mapping providers’ descriptions and user requests in order to find the most appropriate service. The ontology defines the stages of the product lifecycle as services. It also takes into account the features of cloud computing (storage, computing capacity, etc.). The CM ontology will contribute to intelligent and automated service discovery. The proposed methodology is inspired by the ASDI framework (analysis–specification–design–implementation), which has already been used in the supply chain, healthcare and manufacturing domains. The aim of the new methodology is to offer an easy way of designing a library of components for a CM architecture. An example of the application of this methodology with a simulation model, based on the CloudSim software, is presented. The result can help industrial decision-makers who want to design CM architectures.
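As a minimal illustration of the ontology-based matching step, the Python sketch below normalizes provider-specific service vocabularies to shared ontology concepts before ranking offers against a user request; the concept names, synonym table and scoring rule are illustrative assumptions, not the ontology proposed in the paper.

```python
# Toy sketch of ontology-based service matching for cloud manufacturing.
# The concepts and the synonym table below are illustrative assumptions.

# Maps provider-specific vocabulary to canonical ontology concepts.
SYNONYMS = {
    "cnc_milling": "machining",
    "milling": "machining",
    "3d_printing": "additive_manufacturing",
    "fdm": "additive_manufacturing",
    "storage": "cloud_storage",
}

def canonical(terms):
    """Normalize a set of service descriptors to ontology concepts."""
    return {SYNONYMS.get(t, t) for t in terms}

def match_score(request, offer):
    """Fraction of requested concepts covered by a provider's offer."""
    req, off = canonical(request), canonical(offer)
    return len(req & off) / len(req)

providers = {
    "provider_a": {"cnc_milling", "cloud_storage"},
    "provider_b": {"fdm", "storage"},
}
request = {"machining", "storage"}

best = max(providers, key=lambda p: match_score(request, providers[p]))
print(best, match_score(request, providers[best]))  # provider_a 1.0
```

In a full system the synonym table would be replaced by reasoning over the CM ontology, but the discovery principle, normalize then rank, is the same.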
The paper addresses the increased complexity of reactive power management caused by the integration of distributed generation, together with the problems that may appear when the number of controlled objects is large, such as heavy data exchange, low accuracy of reactive power distribution and a slow convergence rate. This paper proposes a reactive power and voltage control management strategy based on virtual reactance cloud control. The coupling between active power and reactive power in the system is effectively eliminated through the virtual reactance. At the same time, the huge amounts of data are processed in parallel using the parallel distributed processing of the cloud computing model, realizing the uncertainty transformation between qualitative concepts and quantitative values. The power distribution matrix is formed according to graph theory, and the accurate allocation of reactive power is realized by applying the cloud control model. Finally, the validity and rationality of this method are verified by testing a practical node system through simulation.
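As a minimal illustration of the allocation step, the following Python sketch distributes a reactive power demand among distributed sources in proportion to their capacities; the source data and the proportional weighting rule are illustrative assumptions rather than the paper's cloud control model.

```python
# Minimal sketch of proportional reactive power allocation via a
# distribution vector; the capacities and weighting rule are assumptions.
import numpy as np

# Available reactive power capacity (Mvar) of each distributed source.
q_capacity = np.array([2.0, 4.0, 4.0])

# Required total reactive power support at the controlled bus (Mvar).
q_demand = 5.0

# Distribution vector: each source's share of the total capacity.
shares = q_capacity / q_capacity.sum()

# Reactive power setpoint for each source.
q_setpoints = shares * q_demand
print(q_setpoints)  # [1.  2.  2.]
```

In the proposed strategy this allocation is refined by the distribution matrix derived from the network graph and by the cloud control model, which handles the qualitative-to-quantitative transformation.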
Recently, Google Earth Engine (GEE) has provided a new way to classify land cover effectively using its available built-in classifiers. However, there have been only a few studies on applications of the GEE so far. Therefore, the goal of this study is to explore the capacity of the GEE platform for land cover classification in Dien Bien Province of Vietnam. Land cover classification for the years 2003 and 2010 was performed using multi-temporal Landsat images. Two algorithms integrated into the GEE platform – GMO Max Entropy and Classification and Regression Tree (CART) – were applied for this classification. The results indicated that the CART algorithm performed better in terms of mapping land cover. The overall accuracies of this algorithm for 2003 and 2010 were 80.0% and 81.6%, respectively. Significant changes between 2003 and 2010 were found: an increase in barren land and a reduction in forest land. This is likely due to the slash-and-burn agricultural practices of ethnic minorities in the province. Barren land seems to occur more at locations near water sources, reflecting the local people’s unsuitable farming practices. This study may provide useful information on land cover change in Dien Bien Province, as well as on the mechanisms of this change, supporting environmental and natural resource management for the local authorities.
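For readers who want to reproduce a comparable workflow, the sketch below shows a CART land cover classification in the Earth Engine Python API; the training asset ID, region, bands and dates are placeholder assumptions, not the exact inputs used in the study.

```python
# Hedged sketch of a CART land cover classification in the Earth Engine
# Python API. The training asset ID and the region are hypothetical.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([102.8, 21.0, 103.6, 22.0])  # rough bounding box (assumption)
bands = ['SR_B1', 'SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B7']

# Median composite of Landsat 5 surface reflectance for 2003.
image = (ee.ImageCollection('LANDSAT/LT05/C02/T1_L2')
         .filterBounds(region)
         .filterDate('2003-01-01', '2003-12-31')
         .median()
         .select(bands))

# Labeled training points; 'users/example/dienbien_training' is a placeholder.
points = ee.FeatureCollection('users/example/dienbien_training')
training = image.sampleRegions(collection=points,
                               properties=['landcover'], scale=30)

# Train the built-in CART classifier and classify the composite.
classifier = ee.Classifier.smileCart().train(
    features=training, classProperty='landcover', inputProperties=bands)
classified = image.classify(classifier)
```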
The problem of performing software tests using a Testing-as-a-Service cloud environment is considered and formulated as online cluster scheduling on parallel machines with the total flowtime criterion. A mathematical model is proposed. Several properties of the problem, including solution feasibility and the connection to classic scheduling on parallel machines, are discussed. A family of algorithms based on a new priority rule called the Smallest Remaining Load (SRL) is proposed. We prove that algorithms from this family are not competitive relative to each other. A computer experiment using real-life data indicated that the SRL algorithm using the longest job sub-strategy performs best. This algorithm is then compared with the Simulated Annealing metaheuristic. Results indicate that the metaheuristic rarely outperforms the SRL algorithm, obtaining worse results most of the time, which is counter-intuitive for a metaheuristic. Finally, we test the accuracy of predicting job processing times. The results indicate high accuracy (91.4%) for predicting the processing times of test cases and even higher accuracy (98.7%) for predicting the remaining load of test suites. Results also show that schedules obtained through prediction are stable (the coefficient of variation is 0.2‒3.7%) and do not affect most of the algorithms (around a 1% difference in flowtime), proving that the considered problem is semi-clairvoyant. For the Largest Remaining Load rule, the predicted values tend to perform better than the actual values. The use of predicted values affects the SRL algorithm the most (up to a 15% flowtime increase), but it still outperforms the other algorithms.
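As an illustration of the priority rule, the following Python sketch applies SRL with the longest job sub-strategy to a toy offline instance on two identical machines; the data and tie-breaking are illustrative assumptions, and the online arrival of test suites is omitted for brevity.

```python
# Toy sketch of the Smallest Remaining Load (SRL) rule with the
# longest-job sub-strategy; instance data and tie-breaking are assumptions.

# Test suites (clusters) as lists of test-case processing times.
suites = {"A": [5, 1], "B": [2, 2, 2], "C": [7]}
machines = [0.0, 0.0]  # next-free times of two identical machines
flowtime = 0.0

while any(suites.values()):
    # SRL: pick the non-empty suite with the smallest remaining load...
    suite = min((s for s in suites if suites[s]),
                key=lambda s: sum(suites[s]))
    # ...and, within it, the longest remaining test case.
    job = max(suites[suite])
    suites[suite].remove(job)
    # Greedily assign the job to the machine that becomes free first.
    m = machines.index(min(machines))
    machines[m] += job
    flowtime += machines[m]  # completion time of this job

print("total flowtime:", flowtime)
```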
The research was aimed at analysing the factors that affect the accuracy of merging point clouds when scanning over longer distances. The research takes into account the limited possibilities of target placement that occur while scanning opposite benches of quarries or open-pit mines, embankments from opposite banks of rivers, etc. In all these cases, there is an obstacle or void between the scanner and the measured object that prevents the optimal placement of targets and increases scanning distances. The accuracy factors for cloud merging are: the placement of targets relative to the scanner and the measured object, the target type and the instrument range. Tests demonstrated that for scanning objects with lower accuracy requirements over long distances, it is optimal to choose flat targets for registration. For objects with higher accuracy requirements, scanned from shorter distances, it is worth selecting spherical targets. The targets and the scanned object should be on the same side of the void.
The paper presents the idea of a prosumer energy cloud as a new service dedicated to electricity prosumers. The implementation of the cloud should generate a number of benefits in the following areas: settlements between the prosumer and the electricity supplier, the development of distributed energy sources in micro-installations, and the development of e-mobility. From the prosumer’s point of view, the proposed prosumer energy cloud is dedicated to the virtual storage of the energy surplus generated in the micro-installation. Virtual energy storage in the cloud means recording the volume of electricity introduced into the electricity system from the prosumer’s micro-installations. It is assumed that energy equivalent to the volume registered in the prosumer cloud can be used at any time at any point of the network infrastructure of the National Power System, where any point of the network infrastructure is understood as any locally situated connection point of an electricity consumer provided with access authorization. From the point of view of the power grid operators, the idea of a prosumer energy cloud is a conceptual proposal of a service dedicated to a new model of power system operation that takes into account future conditions, namely the significant development of prosumer energy and e-mobility. In this concept, electricity would be treated as a commodity subject only partly to physical storage and above all to trade. A key aspect of this model would be virtual energy storage, that is, the commercial guarantee by the cloud operator (a trading company) that its suppliers can make use of their electricity portfolio at any time. It should be stressed, however, that a significant factor in the operation of the prosumer energy cloud would be the cost of guaranteeing that prosumers can use their energy at any time and at any connection point of the network. This entails certain market risks, both volumetric and cost-related, incurred by the cloud operator, which can be minimized by passing a portion of the accumulated volume of generated energy to the cloud operator. It should be emphasized that this article presents the first phase of the development of the prosumer energy cloud concept. It is planned to be expanded in subsequent stages, which include the possibility of controlling and supervising the operation of prosumer installations such as sources, receivers and physical energy stores, e.g. home energy storage units or batteries installed in electric vehicles. Ultimately, it is assumed that the proposed prosumer energy cloud will go beyond the storage of energy (virtual and partly physical) and that the aggregation of prosumer resources will create new possibilities for using them to provide a variety of regulatory services, including system services.
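To make the virtual storage mechanism concrete, the toy Python sketch below records deposits of surplus energy and later withdrawals at arbitrary connection points, with the operator retaining a share of the deposited volume to cover its market risk; the class, the fee parameter and all values are illustrative assumptions, not a settlement model from the article.

```python
# Toy sketch of virtual energy storage bookkeeping: the cloud operator
# records energy fed in by a prosumer and allows an equivalent volume to
# be drawn later at any connection point. All parameters are assumptions.
class ProsumerEnergyCloud:
    def __init__(self, guarantee_fee=0.1):
        self.balances_kwh = {}              # prosumer id -> stored volume
        self.guarantee_fee = guarantee_fee  # share retained by the operator

    def deposit(self, prosumer, kwh):
        """Register surplus energy fed into the grid; the operator keeps
        a share to cover its volumetric and cost risks."""
        credited = kwh * (1.0 - self.guarantee_fee)
        self.balances_kwh[prosumer] = self.balances_kwh.get(prosumer, 0.0) + credited
        return credited

    def withdraw(self, prosumer, kwh):
        """Use the stored equivalent at any network connection point."""
        if self.balances_kwh.get(prosumer, 0.0) < kwh:
            raise ValueError("insufficient virtual storage balance")
        self.balances_kwh[prosumer] -= kwh
        return kwh

cloud = ProsumerEnergyCloud()
cloud.deposit("prosumer_1", 100.0)   # credits 90 kWh after the fee
cloud.withdraw("prosumer_1", 40.0)   # usable anywhere in the grid
print(cloud.balances_kwh)            # {'prosumer_1': 50.0}
```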
A feeder automation (FA) system is usually used by electricity utilities to improve power supply reliability. The FA system is realized by the coordinated control of feeder terminal units (FTUs) in the electrical power distribution network. Existing FA testing technologies can only test the basic functions of FTUs, while the coordinated control function among several FTUs during the self-healing process cannot be tested and evaluated. In this paper, a novel cloud-based digital-physical testing method is proposed and discussed for testing the coordinated control capability of the FTUs in the distribution network. The coordinated control principle of the FTUs in the local-reclosing FA system is introduced first, and then the scheme of the proposed cloud-based digital-physical FA testing method is presented and discussed. The theoretical action sequences of the FTUs, consisting of the FTU under test and the FTUs installed on the same feeder, are analyzed and illustrated. The theoretical action sequences are compared with the test results obtained from the realized cloud-based simulation platform and the digital-physical hybrid communication interaction. The coordinated control capability of the FTUs can be evaluated from this comparison. Experimental verification shows that the FA function can be tested efficiently and accurately using the proposed method in power distribution system inspection.
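As a simplified illustration of the evaluation principle, the Python sketch below compares a theoretical FTU action sequence with a sequence recorded during a test, passing only if every action matches in order and occurs within a time tolerance; the event names, timings and tolerance are illustrative assumptions, not data from the paper.

```python
# Toy sketch of evaluating coordinated control by comparing the
# theoretical FTU action sequence with the recorded test sequence.
# Event names, timings and the tolerance are illustrative assumptions.

# Expected self-healing sequence for a hypothetical fault between FTU2
# and FTU3: (FTU id, action, time in seconds after the fault).
theoretical = [("FTU2", "open", 0.5), ("FTU3", "open", 0.5),
               ("FTU1", "reclose", 1.0)]

# Sequence captured from the FTU under test plus the simulated FTUs.
observed = [("FTU2", "open", 0.52), ("FTU3", "open", 0.49),
            ("FTU1", "reclose", 1.05)]

def evaluate(expected, actual, tolerance=0.1):
    """Pass if every action matches in order and occurs within tolerance."""
    if len(expected) != len(actual):
        return False
    return all(ftu == f and act == a and abs(t - tt) <= tolerance
               for (ftu, act, t), (f, a, tt) in zip(expected, actual))

print("coordinated control test passed:", evaluate(theoretical, observed))
```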