The paper considers the problem of sound radiation from an unflanged duct with mean flow of the medium, taking into account the existence of all allowable wave modes and, in particular, the occurrence of the so-called unstable wave, which results in a decay of radiation on and in the vicinity of the duct axis. The flow is assumed to be uniform, with the source of flow located inside the duct, a case frequently occurring in industrial systems. The mathematical considerations, accounting for multimodal and multifrequency excitation and diffraction at the duct outlet, are based on the model of a semi-infinite unflanged hard duct with flow. In the experimental set-up, a fan mounted inside the duct served as the source of both flow and noise, modelled as an array of uncorrelated sources of broadband noise, which led to the axisymmetric shape of the sound pressure directivity characteristics. The theoretical analysis was carried out for the root-mean-square acoustic pressure under far-field conditions. Experimental results are presented in the form of measured pressure directivity characteristics obtained for uniform flow directed into and out of the duct, compared with those observed for the zero-flow case. The directivity was measured in one-third octave bands over five octaves (500 Hz - 16 kHz) which, for a duct of radius 0.08 m, corresponds to the range 0.74-23.65 in the reduced frequency ka (Helmholtz number) domain. The results obtained are consistent with the theoretical solutions presented by Munt and Savkar, according to whom a weakening of the on-axis and close-to-axis radiation should take place in the presence of medium flow. The experimental results of the present paper indicate that this effect is observed even for a Mach number as low as 0.036.
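As a quick check of the quoted band limits, the Helmholtz number follows from ka = 2πfa/c. With a = 0.08 m and a speed of sound of about 340 m/s (an assumption, since the abstract does not state c), the 500 Hz and 16 kHz band edges reproduce the quoted range 0.74-23.65:

```python
import math

def helmholtz_number(f_hz: float, a_m: float, c_ms: float = 340.0) -> float:
    """Reduced frequency ka for a duct of radius a_m at frequency f_hz."""
    k = 2 * math.pi * f_hz / c_ms  # acoustic wavenumber k = 2*pi*f/c
    return k * a_m

# Band edges quoted in the abstract for a duct of radius 0.08 m
lo = helmholtz_number(500, 0.08)     # ~0.74
hi = helmholtz_number(16_000, 0.08)  # ~23.65
print(round(lo, 2), round(hi, 2))
```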
Convenient human-computer interaction is essential for carrying out many exhausting and concentration-demanding activities. Among them are cyber-situational awareness and dynamic and static risk analysis. A specific design method for a multimodal human-computer interface (HCI) for controlling the visualisation of cyber-security events is presented. The main role of the interface is to support security analysts and network operators in their monitoring activities. The proposed method of designing HCIs is adapted from the methodology of robot control system design. Both kinds of systems act by acquiring information from the environment and utilise it to drive the devices influencing the environment. In the case of robots the environment is purely physical, while in the case of HCIs it encompasses both the physical ambience and part of cyber-space. The goal of the designed system is to efficiently support a human operator in the presentation of cyberspace events such as incidents or cyber-attacks. Manipulation of graphical information is especially necessary. As monitoring is a continuous and tiring activity, control of how the data is presented should be exerted in as natural and convenient a way as possible. Hence, two main visualisation control modalities have been selected for testing: static and dynamic gesture commands and voice commands, treated as supplementary to the standard interaction. The presented multimodal interface is a component of the Operational Centre, which is a part of the National Cybersecurity Platform. Building the interface out of embodied agents proved very useful in the specification phase and facilitated the interface implementation.
The article is an attempt to present the circumstances that nowadays determine the negotiation, conclusion and performance of a multimodal transport contract in Poland. The author focuses in particular on the parties' approach, their business and legal awareness in this respect, and the practical consequences of their decisions. Doctrinal aspects of a multimodal transport contract are taken into account only insofar as they are essential for examining the most common practices of the parties to the contract. Due to the particular character of this publication, the author's views are presented as briefly as possible.
Despite the large number of studies conducted on teachers' oral corrective feedback, the findings of these studies have been mainly limited to cognitive orientations rooted in experimental designs and to the verbal discourse of the teacher as the main object of inquiry. Considering teachers' affective concerns regarding their corrective feedback, the shift from negative to positive psychology in the field of second/foreign language teaching, and the entirety of the teacher's corrective repertoire, in this case study we aimed to explore the enjoyment-building capacity of a teacher's multimodal corrective feedback in a university general English course. We video-recorded the teacher's multimodal corrective feedback, including verbal and nonverbal semiotic resources such as gesture, gaze, and posture, while observing the learners' emotional experiences over eight sessions. We also conducted stimulated recall interviews with some learners and collected their written journals about their experiences of enjoyment with regard to the teacher's multimodal corrective feedback scenarios. The teacher's multimodal corrective feedback was analyzed through systemic functional multimodal discourse analysis (SF-MDA), and the content of the interview transcripts as well as the written journals was qualitatively analyzed. The findings indicated that the inherent multimodality of the teacher's corrective feedback broadened the main dimensions of enjoyment by raising the learners' attention to their errors, heightening their focus on the correct form, and increasing the salience of his corrective feedback. Further arguments regarding the findings are discussed.
Communication with authorities belongs to a field with a long and intensive research tradition. The present paper focuses on the process of understanding in oral institutional communication. It presents some of the mechanisms by which common understanding is achieved using different resources. In contrast to the numerous papers dealing with written institutional communication, little work has been carried out on conversations in the administration. Based on Becker-Mrotzek's (1999, 2001) classification of oral institutional communication into three different types, namely discourses on consultation, objection and application, the present paper focuses on data collection interviews or application discourses (Ger. Datenerhebungsgespräche), which form "the major part of citizen-administration-discourses" (Becker-Mrotzek 1999: 1399). Despite the frequency of these types of discourse, they are the subject of remarkably few studies.
The design and performance analysis of a 1310/1550-nm wavelength division demultiplexer with tapered geometry, based on an InP/InGaAsP multimode interference (MMI) coupler, has been carried out. The wavelength responses of the demultiplexer are discussed for the conventional MMI geometry and for the MMI with tapered input and tapered output (tapered I/O) waveguides. The demultiplexing function is first obtained by choosing a suitable refractive index of the guiding region and by determining geometrical parameters such as the width and length of the MMI structure. The access widths of the tapered I/O waveguides have been adjusted to give a low insertion loss (IL) and a high extinction ratio (ER) at the considered wavelengths of 1310 nm and 1550 nm. The total size of the demultiplexer has been significantly reduced compared with existing MMI devices. Numerical simulations with the finite difference beam propagation method are applied to design and optimize the operation of the proposed demultiplexer.
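For orientation, the standard self-imaging relation for an MMI section gives the beat length of the two lowest-order modes as L_π ≈ 4 n_r W_e² / (3λ); a 1310/1550 nm demultiplexer exploits the fact that L_π differs at the two wavelengths, choosing an MMI length at which the two signals image onto different output ports. The sketch below only evaluates the beat lengths; the refractive index n_r = 3.24 and effective width W_e = 6 µm are illustrative assumptions, not the paper's design values:

```python
def mmi_beat_length(n_r: float, w_e_um: float, lam_um: float) -> float:
    """Beat length L_pi = 4*n_r*W_e^2/(3*lambda) of the two lowest MMI modes,
    with W_e and lambda in micrometres (result also in micrometres)."""
    return 4 * n_r * w_e_um**2 / (3 * lam_um)

# Illustrative values only (not the paper's design parameters)
n_r = 3.24   # assumed effective index of the InP/InGaAsP guiding region
w_e = 6.0    # assumed effective MMI width in micrometres
for lam in (1.310, 1.550):
    print(f"L_pi at {lam} um: {mmi_beat_length(n_r, w_e, lam):.1f} um")
```

The difference between the two beat lengths is what makes a wavelength-selective MMI length possible; tapering the I/O waveguides, as in the paper, is then used to trade off insertion loss against extinction ratio.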
Research on the design of robust multimodal speech recognition systems that make use of acoustic and visual cues extracted with relatively noise-robust alternate speech sensors has recently been gaining interest in the speech processing community. The primary objective of this work is to study the exclusive influence of the Lombard effect on the automatic recognition of confusable syllabic consonant-vowel units of the Hindi language, as a step towards building robust multimodal ASR systems for adverse environments in the context of Indian languages, which are syllabic in nature. The dataset for this work comprises 145 confusable consonant-vowel (CV) syllabic units of Hindi recorded simultaneously using three modalities that capture acoustic and visual speech cues: a normal acoustic microphone (NM), a throat microphone (TM), and a camera that captures the associated lip movements. The Lombard effect is induced by feeding crowd noise into the speaker's headphones while recording. Convolutional Neural Network (CNN) models are built to categorise the CV units based on their place of articulation (POA), manner of articulation (MOA), and vowels, under clean and Lombard conditions. For validation purposes, corresponding Hidden Markov Models (HMMs) are also built and tested. Unimodal Automatic Speech Recognition (ASR) systems built using each of the three speech cues from Lombard speech show a loss in the recognition of MOA and vowels, while POA gets a boost in all the systems due to the Lombard effect. Combining the three complementary speech cues to build bimodal and trimodal ASR systems shows that the recognition loss due to the Lombard effect for MOA and vowels is reduced compared with the unimodal systems, while POA recognition still benefits from the Lombard effect. A bimodal system is proposed using only the alternate acoustic and visual cues, which gives better discrimination of the place and manner of articulation than even the standard ASR system.
Among the multimodal ASR systems studied, the proposed trimodal system based on Lombard speech gives the best recognition accuracies of 98%, 95%, and 76% for the vowels, MOA, and POA, respectively, with an average improvement of 36% over the unimodal ASR systems and 9% over the bimodal ASR systems.
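The abstract does not state how the three streams are combined. One common scheme consistent with "combining the three complementary speech cues" is late fusion, in which per-modality class posteriors are averaged (optionally with weights) before taking the argmax. A minimal sketch, with all probabilities and weights invented purely for illustration:

```python
def late_fusion(posteriors, weights=None):
    """Weighted average of per-modality class posteriors.

    posteriors: list of per-modality probability vectors, one per class.
    Returns the fused score vector.
    """
    n_mod = len(posteriors)
    n_cls = len(posteriors[0])
    if weights is None:
        weights = [1.0 / n_mod] * n_mod  # equal weights by default
    fused = [0.0] * n_cls
    for w, p in zip(weights, posteriors):
        for i, pi in enumerate(p):
            fused[i] += w * pi
    return fused

# Toy posteriors over 3 hypothetical MOA classes from the NM, TM
# and lip-video streams (invented numbers, not the paper's data)
nm  = [0.5, 0.3, 0.2]
tm  = [0.4, 0.4, 0.2]
lip = [0.2, 0.6, 0.2]
fused = late_fusion([nm, tm, lip])
best = max(range(len(fused)), key=fused.__getitem__)
print(fused, best)
```

The point of the toy example is that fusion can overturn an individual stream's decision: the NM and TM streams each favour class 0, but the fused scores favour class 1 once the lip-video evidence is included, which is the kind of complementarity the bimodal and trimodal results in the abstract rely on.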