Traffic classification is an important tool for network management. It reveals the source of observed network traffic and has many potential applications, e.g. in Quality of Service, network security and traffic visualization. In the last decade, traffic classification evolved quickly due to the rise of peer-to-peer traffic, and researchers continue to devise new methods to keep pace with the rapid changes of the Internet. In this paper, we review 13 publications on traffic classification and related topics published during 2009-2012. We show the diversity of recent algorithms and highlight possible directions for future research on traffic classification: the relevance of multi-level classification, the importance of experimental validation, and the need for common traffic datasets.
In this article we study a model of network transmissions with Active Queue Management in an intermediate IP router. We use the OMNeT++ discrete event simulator to model various variants of the CHOKe algorithm. We model a system in which CHOKe, xCHOKe and gCHOKe serve as the AQM policy. The obtained results show the behaviour of these algorithms. The paper also presents the implementation of AQM mechanisms in a Linux-based router.
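The core idea of CHOKe is to compare each arriving packet with a randomly chosen queued packet and drop both when they belong to the same flow, so that heavy flows punish themselves. A minimal sketch of this comparison step, combined with a RED-style drop curve, is given below; the class name, thresholds and parameters are illustrative, not the paper's OMNeT++ model:

```python
import random
from collections import deque

class ChokeQueue:
    """Toy sketch of the CHOKe AQM policy (illustrative parameters)."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, capacity=20):
        self.q = deque()                     # queued packets: (flow_id, payload)
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.capacity = max_p, capacity

    def enqueue(self, pkt):
        qlen = len(self.q)
        if qlen >= self.min_th and self.q:
            # CHOKe step: compare with a random packet already queued;
            # if both belong to the same flow, drop both.
            victim = random.choice(self.q)
            if victim[0] == pkt[0]:
                self.q.remove(victim)
                return False
        if qlen >= self.max_th or qlen >= self.capacity:
            return False                     # hard drop above the max threshold
        if qlen >= self.min_th:
            # RED-style probabilistic drop between the two thresholds
            p = self.max_p * (qlen - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False
        self.q.append(pkt)
        return True
```

The xCHOKe and gCHOKe variants studied in the paper refine this comparison step (e.g. multiple candidate comparisons); the sketch covers only the basic mechanism.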
We propose time slot routing, a novel routing scheme that allows for a simple design of interconnection networks. Simulation results show that the proposed scheme achieves optimal performance at the maximal uniform network load, and that for uniform loads the network throughput is greater than with deflection routing.
In this article we study a model of a TCP connection with Active Queue Management in an intermediate IP router. We use the fluid flow approximation technique to model the interactions between a set of TCP flows and AQM algorithms. Computations for the fluid flow approximation model are performed in the CUDA environment.
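The fluid flow approximation replaces individual packets with coupled differential equations for the average congestion window and queue length. A minimal sketch of the classic Misra-Gong-Towsley fluid model of N TCP flows through one RED queue is shown below; it uses a constant RTT, ignores the feedback delay, and all parameter values are illustrative (the paper's CUDA model is richer):

```python
def fluid_tcp_red(N=50, C=1500.0, R=0.2, min_th=50, max_th=150, max_p=0.1,
                  T=20.0, dt=1e-3):
    """Euler integration of a simplified TCP/RED fluid model.
    W = average congestion window [pkts], q = queue length [pkts]."""
    W, q = 1.0, 0.0
    for _ in range(int(T / dt)):
        # RED marking probability as a function of the queue length
        if q <= min_th:
            p = 0.0
        elif q >= max_th:
            p = 1.0
        else:
            p = max_p * (q - min_th) / (max_th - min_th)
        dW = 1.0 / R - (W * W / (2.0 * R)) * p   # additive increase / mult. decrease
        dq = N * W / R - C                       # arrival rate minus link capacity
        W = max(W + dW * dt, 0.0)
        q = min(max(q + dq * dt, 0.0), 2.0 * max_th)
    return W, q
```

The per-step updates are independent across flow classes, which is what makes the CUDA parallelization mentioned in the abstract natural.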
The paper presents a new ontology-based approach to the elaboration and management of evidences prepared by developers for the IT security evaluation process according to the Common Criteria standard. The evidences concern the claimed EAL (Evaluation Assurance Level) of a developed IT product or system, called the TOE (Target of Evaluation), and depend on the TOE features and its development environment. Evidences should be prepared for the broad range of IT products and systems requiring assurance. Selected issues concerning the ontology elaborated by the author are discussed, such as: definition of the ontology domain and scope, identification of terms within the domain, identification of the hierarchy of classes and their properties, creation of instances, and the ontology validation process. This work is aimed at developing a prototype of a knowledge base representing patterns for evidences.
The paper presents a methodology for generating uniform compressive or tensile stresses in ring-shaped amorphous alloy cores. In this study we use a set of special non-magnetic cylindrical backings. These backings enable the core to be wound and make it possible to generate uniform compressive and tensile stresses. Using the presented methodology, the magnetoelastic characteristics of the Fe40Ni38Mo4B18 Metglas 2826 MB amorphous alloy were determined experimentally. Knowledge of these properties is important for users of inductive components with amorphous alloy cores, since changes of flux density caused by the magnetoelastic effect exceed 40%.
This paper concerns the possibility of using the extended Jiles-Atherton model to describe the magnetic characteristics of St3 construction steel under mechanical stress. Results of modelling with the extended Jiles-Atherton model are consistent with experimental measurements of magnetic hysteresis loops B(H). Determining the stress state of a material by non-destructive testing techniques based on magnetic properties is an especially important problem.
At the early stage of information system analysis and design, one of the challenges is to estimate the total work effort needed when only a small number of analysis artifacts is available. As a solution we propose a new method called SAMEE – Simple Adaptive Method for Effort Estimation. It is based on the idea of polynomial regression and uses selected UML artifacts such as use cases, actors, domain classes and the references between them. In this paper we describe the implementation of this method in the Enterprise Architect CASE tool and show a simple example of its use in real information system analysis.
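The polynomial-regression idea behind such effort estimation can be sketched in a few lines: fit a low-degree polynomial mapping a UML metric (e.g. use-case count) to historical effort, then evaluate it for a new project. This is a generic sketch, not SAMEE itself; the method's actual features, adaptivity and calibration data are not reproduced here, and the numbers below are hypothetical:

```python
import numpy as np

def fit_effort_model(counts, efforts, degree=2):
    """Least-squares polynomial fit of effort vs. one UML artifact count."""
    coeffs = np.polyfit(counts, efforts, degree)
    return np.poly1d(coeffs)

# Hypothetical calibration data: (use-case count, person-days of effort)
model = fit_effort_model([5, 10, 20, 40], [30, 65, 150, 360], degree=2)
estimate = model(25)   # predicted effort for a project with 25 use cases
```

SAMEE additionally combines several artifact types (actors, domain classes, references), which would correspond to a multivariate regression rather than this single-variable example.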
This paper presents a non-linear mathematical model of a computer network that includes a wireless part. The article contains an analysis of the stability of a network based on TCP-DCR, a modification of traditional TCP. The block diagram of the network model was converted to a form suitable for investigating D-stability using the method of the space of uncertain parameters. Robust D-stability is computed for constant delay values.
Some data sets contain clusters not in all dimensions but in subspaces. Known algorithms select attributes and identify clusters in subspaces. The paper presents a novel algorithm for subspace fuzzy clustering. Each data example has a fuzzy membership to a cluster. Each cluster is defined in a certain subspace, but the membership of the cluster's descriptors to that subspace (called the descriptor weight) is fuzzy, taking values from the interval [0,1]: the descriptors of a cluster can have partial membership to the subspace the cluster is defined in. Thus the clusters are fuzzily defined in their subspaces. The clusters are defined by their centre, fuzziness and descriptor weights. The clustering algorithm is based on minimization of a criterion function. The paper is accompanied by experimental clustering results. This approach can be used to partition the input domain when extracting rule bases for neuro-fuzzy systems.
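The combination of fuzzy memberships and fuzzy descriptor weights can be illustrated with a generic attribute-weighting variant of fuzzy c-means: both the membership matrix U and the per-cluster weight matrix W are updated by alternating minimization of a weighted criterion. This is a sketch of the general scheme under standard FCM-style update rules; the paper's exact criterion function and updates may differ:

```python
import numpy as np

def subspace_fcm(X, n_clusters=2, m=2.0, t=2.0, n_iter=50, seed=0):
    """Alternating minimization with fuzzy memberships U[i, k] and
    fuzzy descriptor weights W[k, d] in [0, 1] (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    C = X[rng.choice(n, n_clusters, replace=False)]    # initial centres
    W = np.full((n_clusters, dim), 1.0 / dim)          # descriptor weights
    eps = 1e-9
    for _ in range(n_iter):
        D = (X[:, None, :] - C[None, :, :]) ** 2       # (n, k, d) per-attribute dist.
        dist = (D * W[None, :, :]).sum(axis=2) + eps   # weighted distances
        # fuzzy membership update (standard FCM formula)
        ratio = dist[:, :, None] / dist[:, None, :]
        U = 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)
        Um = U ** m
        # centre update: membership-weighted means of the data
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # descriptor-weight update: attributes with small within-cluster
        # scatter get a larger (fuzzy) weight in that cluster's subspace
        S = (Um[:, :, None] * D).sum(axis=0) + eps     # (k, d) scatter
        W = S ** (-1.0 / (t - 1.0))
        W /= W.sum(axis=1, keepdims=True)
    return C, U, W
```

Here m controls the fuzziness of memberships and t the fuzziness of the descriptor weights, mirroring the two levels of fuzziness described in the abstract.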
In wireless mobile networks, a client can move between different locations while staying connected to the network and can access a remote server over the mobile network using a mobile device at any time and anywhere. However, a wireless network is more prone to security attacks, as it lacks the inherent physical security of wired networks. Thus, client authentication is required when accessing a remote server through a wireless network. Based on the elliptic curve cryptosystem (ECC) and identity-based cryptography (IBC), Debiao et al. proposed an ID-based client authentication with key agreement scheme to reduce the computation and communication loads on mobile devices. As they claimed, the scheme is suitable for mobile client-server environments, is secure against different attacks, and provides mutual authentication with session key agreement between a client and the remote server. Unfortunately, this paper demonstrates that Debiao et al.'s scheme is vulnerable to several cryptographic attacks, and proposes an improved ID-based client authentication with key agreement scheme using ECC. The proposed scheme is secure based on the Elliptic Curve Discrete Logarithm Problem (ECDLP) and the Computational Diffie-Hellman Problem (CDHP). The detailed analysis shows that our scheme overcomes the drawbacks of Debiao et al.'s scheme and achieves more functionality for client authentication at a lower computational cost than other schemes.
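The ECDLP/CDHP assumptions underlying such schemes rest on scalar multiplication on an elliptic curve group. The toy example below shows the primitive only, on the textbook curve y² = x³ + 2x + 2 over F₁₇ (far too small for any real security), and is not the proposed authentication scheme; it merely illustrates why a Diffie-Hellman-style shared key agrees for both parties:

```python
def ec_add(P, Q, a=2, p=17):
    """Affine point addition on y^2 = x^3 + 2x + 2 over F_17.
    None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Diffie-Hellman-style agreement: both sides compute the same point,
# while recovering a private scalar from its public point is the ECDLP.
G = (5, 1)                       # generator of the curve group (order 19)
shared_a = ec_mul(3, ec_mul(7, G))   # client: own key 3, server's public 7*G
shared_b = ec_mul(7, ec_mul(3, G))   # server: own key 7, client's public 3*G
```

Real schemes use standardized curves with ~256-bit prime fields and combine this primitive with identity-based keys and authentication messages.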
The predicted annual growth of energy consumption in ICT by 4% towards 2020, despite improvements and efficiency gains in technology, challenges our ability to claim that ICT provides overall gains in energy efficiency and carbon footprint as computers and networks are increasingly used in all sectors of activity. Thus we must find means to limit this increase while preserving quality of service (QoS) in computer systems and networks. Since the energy consumed in ICT is related to system load, this paper discusses the choice of system load that offers the best trade-off between energy consumption and QoS. We use both simple queueing models and measurements to develop and illustrate the results. A discussion of future research directions is also provided.
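The trade-off can be illustrated with a minimal M/M/1 sketch (all parameter values are illustrative, not the paper's models or measurements): at low load the idle power is amortized over few jobs, so energy per job is high; at high load the mean response time 1/(μ−λ) blows up; a combined cost is minimized at an intermediate load:

```python
import numpy as np

def cost(rho, mu=1.0, p_idle=0.3, p_peak=1.0, alpha=1.0, beta=1.0):
    """Combined energy/QoS cost for an M/M/1 server at utilization rho:
    power draw grows linearly with load, energy per job = power/throughput,
    and the QoS penalty is the mean response time 1/(mu - lambda)."""
    lam = rho * mu                                  # arrival rate
    power = p_idle + (p_peak - p_idle) * rho        # linear power model
    energy_per_job = power / lam
    resp_time = 1.0 / (mu - lam)
    return alpha * energy_per_job + beta * resp_time

# grid search for the load that balances the two penalties
rhos = np.linspace(0.05, 0.95, 181)
best_rho = rhos[np.argmin([cost(r) for r in rhos])]
```

The weights alpha and beta encode how much the operator values energy versus delay; shifting them moves the optimal operating load.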
The problem that this paper investigates, namely optimization of overlay computing systems, follows naturally from the growing need for effective processing and, consequently, the fast development of various distributed systems. We consider an overlay-based computing system, i.e., a virtual computing system deployed on top of an existing physical network (e.g., the Internet) providing connectivity between computing nodes. The main motivation behind the overlay concept is the simple provision of network functionalities (e.g., diversity, flexibility, manageability) in a relatively cost-effective way, regardless of the physical and logical structure of the underlying networks. The workflow of tasks processed in the computing system assumes that there are many sources of input data and many destinations of output data, i.e., many-to-many transmissions are used in the system. The addressed optimization problem is formulated as an ILP (Integer Linear Programming) model. Since the model is computationally demanding and NP-complete, besides the branch-and-bound algorithm included in the CPLEX solver we propose additional cut inequalities. Moreover, we present and test two effective heuristic algorithms: tabu search and a greedy algorithm. Both methods yield satisfactory results close to optimal.
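The flavour of a greedy heuristic for such assignment problems can be sketched as follows; the cost model, names and capacity constraint are illustrative only, and the paper's ILP additionally covers many-to-many data transfers and further constraints:

```python
def greedy_assign(tasks, nodes):
    """Toy greedy heuristic: place each task on the node with the lowest
    marginal cost that still has spare capacity.  tasks maps name -> size,
    nodes maps name -> {"capacity": ..., "unit_cost": ...}."""
    load = {n: 0 for n in nodes}
    assignment = {}
    # process largest tasks first, a common greedy ordering
    for name, size in sorted(tasks.items(), key=lambda kv: -kv[1]):
        feasible = [n for n in nodes if load[n] + size <= nodes[n]["capacity"]]
        if not feasible:
            raise ValueError(f"no node can host task {name}")
        # marginal cost: processing cost plus a load-balancing term
        best = min(feasible, key=lambda n: nodes[n]["unit_cost"] * size + load[n])
        assignment[name] = best
        load[best] += size
    return assignment
```

A tabu search, the paper's second heuristic, would start from such a greedy solution and iteratively move tasks between nodes while forbidding recently reversed moves.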