Keynote Speech:
Instructional videos for software training. A discussion of relevant theories, designs and empirical studies

Hans van der Meij, University of Twente
Companies such as Apple, IBM, Microsoft, and SAP have rapidly stepped up the production and distribution of instructional videos for their clients. The majority of these videos serve as a job aid, supporting or facilitating task achievement with software. A subset of these videos serves a training purpose; such videos have the additional goal of enhancing learning. This presentation concentrates on such tutorials for software training.
These videos are examined from three perspectives: theory, design and empirical research. Among the theories presented are theories of multimedia learning and event cognition, which give basic insights into how people process video. Based on this literature, we propose a set of eight domain-specific guidelines for the construction of video tutorials for software training. For each guideline we discuss the main rationale and illustrate how it finds expression in design. Finally, we give an overview of the empirical research on these guidelines.
Does Detail Matter? Level of Detail in Technical Illustrations and its Effect on Task Execution
Kerstin Alexander, Hochschule Merseburg (University of Applied Sciences)

Stylized technical illustrations, such as line drawings or 2D graphics, constitute a type of illustration that is used very frequently in instructional materials. One important design decision technical communicators have to make when creating stylized illustrations concerns the amount of detail that the illustration should retain. Depending on the source data from which an illustration is derived, the level of detail can initially be very high. The question then is whether and how much detail should be removed for the illustration to optimally support comprehension and task execution.

In this talk, we report the results of an experimental study addressing the question whether the amount of detail in stylized technical illustrations affects the efficiency of executing the tasks that the illustrations support. We designed three versions of a quick start manual for a standard laser printer. The manuals covered two simple tasks, for example inserting paper into the printer, and two complex tasks, for example resolving a paper jam. The procedures for the simple tasks consisted of 3 action steps (task 1) and 2 action steps (task 4), and those for the complex tasks of 5 steps (task 2) and 11 steps (task 3), respectively. The procedures describing each task relied on pictures only. Each step within a procedure was described using one or more stylized illustrations encoding the respective action and pointing to the relevant parts of the printer involved in the action. The manuals differed only with respect to the level of detail retained in the illustrations: the illustrations of manual 1 contained the maximum level of detail, corresponding to the density of a technical drawing. For manual 2, all details with no obvious relevance for the tasks to be performed were removed. For manual 3, the level of detail was reduced even further, basically keeping only those lines and shapes that were deemed necessary to reliably locate and identify the relevant parts of the printer.

Besides the variable “manual type”, the study also controlled the variable “age” in order to test whether the level of detail in the illustrations differentially affected young users (defined as below 30) and older users (defined as above 50). A total of 30 participants took part in the experiment, 15 in each age group. Within each age group, participants were distributed evenly across the three manual types. Time to perform each task and success rate for each task were recorded during the test.

Both success rates (in % correct) and task execution times (in seconds) were averaged by participant across tasks and subjected to an analysis of variance including the factors “manual type” and “age”. The overall mean success rate was 87%. None of the factors reached significance and no interaction between age and manual type was observed. For task execution time, the analysis revealed a main effect of manual type (p<.05). Subsequent pairwise comparisons showed that task execution was faster with manual 2 than with both manual 1 and manual 3. There was no main effect of age and no interaction between manual type and age. Taken together, the results suggest that the level of detail in technical illustrations does matter. Illustrations with a medium level of detail support task execution better than illustrations with a high or low level of detail. This effect holds both for young and older users. In the remainder of the talk, we will discuss critical design aspects that possibly contributed to the observed effect, as well as design strategies that may help technical communicators to decide on the optimal amount of detail needed.
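
For readers who want to see what such an analysis looks like in practice, the following Python sketch runs a two-factor analysis of variance of the kind described above; the column names, file name and software choice are illustrative assumptions and are not taken from the study itself.

# Minimal sketch of a two-factor analysis of variance (manual type x age group)
# on task execution times, using pandas and statsmodels. Column names and the
# input file are illustrative only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Expected columns: participant, manual_type (1, 2 or 3),
# age_group ("young" or "older"), time (mean execution time in seconds).
df = pd.read_csv("task_times.csv")

# Linear model with both main effects and their interaction.
model = ols("time ~ C(manual_type) * C(age_group)", data=df).fit()

# Type II ANOVA table: main effects of manual type and age group, plus interaction.
print(sm.stats.anova_lm(model, typ=2))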

Designing State-of-the-Art Reference Tools for Technical Writers
Georg Löckinger, University of Applied Sciences Upper Austria

Technical writers use language as the principal means for conveying information to their target audience. This is why they must regularly search for domain-specific information to fulfil their information needs. In doing so, they may face problems such as the following: 1) They need to do their research in a growing number and range of different language resources (terminological databases, text corpora, taxonomies, classification systems, thesauri, dictionaries of various kinds, etc.). 2) The relevant information sources are scattered over different media, or they are not accessible from a single user interface. 3) Consequently, technical writers must use several computer applications that typically have different user interfaces and do not interact with each other in a systematic and ergonomic way.

Basically, technical writers deal with four forms of domain-specific information: object-related information, concept-related information, designation-related information and context-related information. When a technical writer encounters a difficulty in text production, for example when he or she does not fully understand the concept denoted by a given designation, it may be necessary to search several language resources to resolve it. With standard reference tools and language resources, this activity may be more time-consuming than necessary. Furthermore, many standard tools and resources focus on only one type of domain-specific information, while technical writers often need the broader picture.

Based on recent research about translation-oriented special language dictionaries, the present paper draws a parallel between professional translators’ information needs and those of technical writers. A closer look at the latter shows that technical writers’ profile as users of special language reference tools is quite similar to that of professional translators. Taking this user profile as a starting point, it is discussed how a recently published model of dynamic terminology and full-text databases may be implemented in state-of-the-art reference tools for technical writers. Finally, some thoughts are presented on how an innovative language resource bundle could be combined effectively with relevant language technologies in an existing prototype for the management of large amounts of language data.
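
To make the idea of a single point of access to heterogeneous resources more concrete, the following Python sketch shows one possible, purely illustrative way of wrapping different language resources behind a common lookup interface. The class names, method names and sample data are assumptions made for this sketch and are not taken from the prototype mentioned above.

# Illustrative sketch: one lookup interface over heterogeneous language
# resources (here a term base and a text corpus). All names are hypothetical.
from typing import Protocol

class LanguageResource(Protocol):
    name: str
    def lookup(self, query: str) -> list[str]: ...

class TermBase:
    name = "term base"
    def __init__(self, entries: dict[str, str]):
        self.entries = entries
    def lookup(self, query: str) -> list[str]:
        # Concept-related information: return the stored definition, if any.
        return [self.entries[query]] if query in self.entries else []

class Corpus:
    name = "text corpus"
    def __init__(self, sentences: list[str]):
        self.sentences = sentences
    def lookup(self, query: str) -> list[str]:
        # Context-related information: return sentences containing the query.
        return [s for s in self.sentences if query.lower() in s.lower()]

def unified_lookup(query: str, resources: list[LanguageResource]) -> dict[str, list[str]]:
    # One query, one result set per resource, one point of access.
    return {resource.name: resource.lookup(query) for resource in resources}

resources = [
    TermBase({"thesaurus": "Controlled vocabulary that records relations between concepts."}),
    Corpus(["The thesaurus links broader and narrower terms.", "Corpora provide usage contexts."]),
]
print(unified_lookup("thesaurus", resources))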

The Misinterpretation of Complex Sentences
Michael Meng, Hochschule Merseburg (University of Applied Sciences)
Making appropriate design decisions to ensure accurate text comprehension is one of the central challenges for technical communicators. Practitioners therefore need to understand the main processes involved in text comprehension at word, sentence and text level, and they need to know the main factors that facilitate or hinder accurate comprehension at each level.

The research reported in this paper is concerned with comprehension processes at sentence level, i.e. the processes whose goal is to recover the proposition(s) that a sentence conveys. As most current models assume, sentence comprehension is basically a two-stage process: comprehenders initially construct the syntactic representation of a sentence, which in turn forms the basis for sentence interpretation, such as the identification of the doer of an action or the object affected by the action. Factors increasing syntactic complexity, such as passives, non-canonical word order or sentence embeddings, may slow down this process, but ultimately a complete syntactic representation results and, consequently, a sentence interpretation that is consistent with sentence syntax. This view has been challenged by the “Good Enough Interpretation” model of Ferreira (2003). According to Ferreira, sentence comprehension is initially rather shallow. In their attempt to recover the proposition(s) of a sentence, comprehenders do not rely on a detailed syntactic analysis, but follow plain heuristic strategies, for example that the doer of an action is mentioned before the affected object. In addition, comprehenders rely on semantic cues and try to construct plausible interpretations, even if such an interpretation is in conflict with the syntactic structure the sentence actually has. A crucial consequence of this model is that syntactic complexity may not just slow down comprehension, but encourage shallow comprehension. Shallow comprehension, however, increases the risk that comprehenders misinterpret a sentence, especially in situations in which the cognitive resources of the comprehender are taxed anyway, e.g. when a text is read while operating software or a machine.

In this paper, we report the results of an experimental study designed to test how the accuracy of comprehension at sentence level depends on syntactic complexity in German, basically following the method used in Ferreira (2003). The experiment presented simple sentences word by word on a computer screen. Sentences varied in syntactic complexity: (i) simple active sentences with subject-object order (“Der Programmstart initialisiert den Controller-Prozess”), (ii) passives (“Der Controller-Prozess wird vom Programmstart initialisiert”) and (iii) object-initial sentences (“Den Controller-Prozess initialisiert der Programmstart”). Moreover, the nouns of each sentence occurred in two orders, with the second order simply reversing the phrases serving as subject and object. For example, “Der Controller-Prozess initialisiert den Programmstart” is order 2 of example (i). To test comprehension, participants were required to name aloud the doer of the described action (e.g. “Programmstart” in (i)-(iii)) or the affected object (“Controller-Prozess”) after each sentence was presented.
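
The abstract does not name the presentation software used; purely as a schematic illustration of the word-by-word presentation and probe procedure, the following Python sketch mimics the task in a console session, with typed responses standing in for the spoken naming responses and with an arbitrary presentation rate.

# Schematic console version of a word-by-word sentence presentation followed
# by a comprehension probe. Items, presentation rate and scoring are
# illustrative; the real experiment used spoken responses.
import time

ITEMS = [
    # (sentence, probe question, expected answer)
    ("Der Programmstart initialisiert den Controller-Prozess",
     "Name the doer of the action:", "Programmstart"),
    ("Den Controller-Prozess initialisiert der Programmstart",
     "Name the doer of the action:", "Programmstart"),
]

WORD_DURATION = 0.4  # seconds per word, an arbitrary choice for this sketch

for sentence, probe, expected in ITEMS:
    for word in sentence.split():
        print(word)
        time.sleep(WORD_DURATION)
    answer = input(probe + " ")
    if expected.lower() in answer.lower():
        print("correct")
    else:
        print("incorrect (expected: " + expected + ")")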

The results show a clear effect of syntactic complexity: participants made a significant number of comprehension errors with object-initial sentences (iii) and, to a lesser extent, with passives (ii). The results also show that the effect of syntactic complexity on comprehension accuracy is modulated by semantic plausibility. If semantic plausibility provides a clear cue regarding the doer of an action and the affected object (as in “Den Stromfluss unterbricht der Schalter”), comprehension accuracy improves. In sum, the results demonstrate that syntactic complexity can in fact lead to misinterpretation of sentences in German. In the remainder of the talk, we will discuss several questions raised by the results, e.g. whether these comprehension errors indeed confirm that comprehenders rely on heuristic strategies, and which design strategies technical communicators can use to reduce the risk of misinterpretation.
Optimized Technical Communication: Can We Devise an Efficiency Gauge?
Franziska Heidrich and Klaus Schubert, University of Hildesheim

In technical communication, techniques of optimization are common. We use controlled languages, information structuring, style guides, audience design and internationalization of texts to be translated, not to mention the many software systems that support these techniques and procedures.

In the technical communication industry, enhancements and optimizations are often measured by means of managerial and economic criteria. In a simplistic view, these boil down to making processes faster and cheaper. However, these obvious advantages are often, less visibly, counterbalanced or even outweighed by losses in quality. While economization is surely not a negligible factor from a managerial point of view, our linguistic point of view focuses on the efficiency of communication.

The talk outlines a research endeavour in progress aimed at devising a metric for assessing optimization in technical communication. Our starting point is the theoretical approach taken by a renowned scholar in the field of languages for special purposes, Thorsten Roelcke (Kommunikative Effizienz, 2002). Roelcke not only defines communicative efficiency and its constituents, but also provides definitions for a textual and a systematic efficiency concept. We try to transpose Roelcke's concept into a real-life environment of technical communication and technical translation as it is found at workplaces in industry and with service providers. The metric should include all involved actors, such as the customer, the technical communicator and the recipients, but also the informants and the authors of style guides, handbooks of technical communication, corporate-identity handbooks, textbooks, standards, regulations and laws. Further, it should account for all controlling influences, such as the assignment brief, best practice and national or corporate culture. The aim of the research endeavour is to provide an assessment concept for external factors influencing the technical communication and/or technical translation process, such as the practicability of different optimization approaches for different text types, the benefiting participants in different communicative scenarios, and the actual influence of the optimization approaches on the documents' comprehensibility and translatability, and thus the resulting increase in efficiency. A derived aim is to show that comprehensibility can actually be equated with translatability. The concept of efficiency with regard to content and language is to be made assessable.
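
As a deliberately naive illustration, and not as Roelcke's own formalization, the basic intuition behind such a metric can be sketched as a ratio, so that a measure counts as an optimization only if it raises this ratio:

    \text{efficiency} \;=\; \frac{\text{communicative benefit}}{\text{communicative effort}}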

The metric should model the fact that an optimum is not the same as a maximum. Rather, an optimum is the best solution achievable in a situation where conditions and influences work in opposing directions. Optimizing means balancing opposing influences.

MacroDico: Improving Scientific and Technical Communication through Corpus-based Terminology
Sandrine Peraldi, ISIT

Modern terminology, especially in association with the development of corpus linguistics, knowledge engineering and information technologies, is a major vehicle for scientific and technical communication as interdisciplinarity, transdisciplinarity and complexity spread among the sciences.

Determining the terminology of a given subject field enables cross-comprehension between teachers and learners who need to share a common metalanguage. It also provides researchers with a useful writing aid by reflecting the discursive patterns and devices of native speakers and/or members of a given socio-professional community.
However, a number of terminological products are still perceived and presented as outdated and unsuited to the users’ needs. This can be partially explained by the structuring of these databases. The small number of terminological fields available, the lack of conceptual representation or phraseological data and the prevalence of purely linguistic information thus restrict these tools to only one category of users: that of translators. Multilingual term bases should no longer be thought of as a mere inventory of terms, but as a pedagogical tool and a source of structured information and knowledge.

To illustrate this, the author will present an overview of an online multilingual terminological knowledge base, MacroDico, developed in the field of nanotechnology. Initiated at the request of the CNRS (France's national scientific research centre), this applied research project is the result of several years of collaboration between Master's students in translation and in computer engineering, in an effort to put the end user at the heart of the design process.

Supervised by researchers in corpus linguistics and experts in nanotechnologies, this project responded, and still responds, to specific requests by French scientists for improved communication in English. Indeed, nanotechnology (first as an emerging science and then as a constantly evolving discipline) is subject to conceptual and terminological indeterminacy and multidimensionality.

In terms of data modelling, the knowledge base was structured around four main dimensions (term level, conceptual level, phraseological level, linguistic level) in order to address the many conceptual and linguistic difficulties of such a highly specialised field, enabling a solid understanding of the domain.
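
Purely for illustration, the following Python sketch shows how an entry organised along these four dimensions might be modelled; the field names and sample content are assumptions and do not reproduce the actual MacroDico schema.

# Illustrative data model for a multilingual term entry organised along the
# four dimensions mentioned above. Field names and sample content are
# assumptions, not the actual MacroDico schema.
from dataclasses import dataclass, field

@dataclass
class TermEntry:
    # Term level: the designations themselves, per language.
    terms: dict[str, list[str]]
    # Conceptual level: definition and relations to other concepts.
    definition: str = ""
    related_concepts: list[str] = field(default_factory=list)
    # Phraseological level: typical collocations and patterns.
    collocations: list[str] = field(default_factory=list)
    # Linguistic level: grammatical information and usage notes.
    grammar: str = ""
    usage_notes: str = ""

entry = TermEntry(
    terms={"en": ["carbon nanotube"], "fr": ["nanotube de carbone"]},
    definition="Cylindrical nanostructure made of carbon atoms.",
    related_concepts=["fullerene"],
    collocations=["single-walled carbon nanotube", "grow carbon nanotubes"],
    grammar="noun, countable",
)
print(entry.terms["fr"][0])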

Content-wise, the project was carried out following a corpus- and tool-based approach. A corpus of several million words was built with a view to undertaking a semi-automatic extraction of the relevant data. The corpus was first analysed at macro-level (genre analysis) to identify the recurring lexical and discursive patterns that characterise the different sub-fields of nanotechnologies. At micro-level, terms were analysed with the help of advanced query systems (e.g. Sketch Engine) in order to identify the key concepts of the domain as well as refined conceptual, phraseological and intercultural data to feed the multilingual term base.
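
As a much simplified stand-in for the corpus query systems actually used (such as Sketch Engine), the following Python sketch illustrates the basic idea of frequency-based extraction of term candidates from a corpus; the stoplist, thresholds and sample text are illustrative assumptions.

# Very simplified illustration of semi-automatic extraction of term candidates:
# count unigrams and bigrams and keep the most frequent ones. Real projects
# rely on dedicated corpus query systems instead of this toy approach.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "are", "on", "for", "with"}

def term_candidates(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    tokens = [t for t in re.findall(r"[a-zA-Z-]+", text.lower()) if t not in STOPWORDS]
    unigrams = Counter(tokens)
    bigrams = Counter(" ".join(pair) for pair in zip(tokens, tokens[1:]))
    return (unigrams + bigrams).most_common(top_n)

sample = ("Carbon nanotubes are grown on a substrate. "
          "Single-walled carbon nanotubes show remarkable electrical properties.")
print(term_candidates(sample))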

The author of this paper thus proposes to present the methodology and the results of this specific applied project, demonstrating the many contributions of textual terminology in improving scientific and technical communication.

Practitioner Perspectives on Technical Communication Education: A Report from Ireland and Implications for Academic Programme Content
Yvonne Cleary, University of Limerick

A report from TCEurope’s TecDoc-Net project (2005)¹ lists the following core competencies for technical communicators in Europe: communication theory; understanding and using tools; [knowledge of] the regulatory framework; project and process management; information gathering; documentation planning and information development; usability; structuring information; standardisation techniques; professional writing; editing; visual communication; layout and typography. Probably the most significant omission from this list is collaboration: Rainey, Turner and Dayton (2005)² found from surveys that managers rate collaboration as the most important quality in a technical communicator, more important even than writing skills (ranked second).

This presentation discusses the findings of surveys and interviews conducted with technical writers in Ireland, examining their educational backgrounds, typical work tasks and experiences, attitudes to qualifications in technical communication, and recommendations regarding technical communication curricula. The overall purpose of the study was to help determine content to be included in an Irish technical communication curriculum.

Findings from this study reveal the multiple tasks that a technical communicator must master: writing, communicating with developers and subject matter experts, planning, research, editing, information and graphic design, structured authoring, interaction design, aspects of interface design, and the myriad new media roles that are increasingly necessary. Most respondents to the survey and participants in the interviews are educated to postgraduate level (60% have technical communication or related qualifications) and work in private industry, mostly in IT sectors. The findings show that although the role requirements are diverse, key skills include audience analysis, writing, interviewing for information and information design. Areas for curricular development include increased emphasis on rhetoric and writing practice, structured authoring (including DITA), and scope for internships within programmes.

¹TCEurope (2005) “Professional education and training of technical communicators in Europe—guidelines”, TecDoc-Net Project, [online], available: www.tceurope.org/images/stories/downloads/projects/tecdoc.pdf [accessed 19 June 2014].
² Rainey, K. T., Turner, R. K. and Dayton, D. (2005) “Do curricula correspond to managerial expectations? Core competencies for technical communicators”, Technical Communication, 52(3), 323-352.

The tekom Standard Process for Technical Communication
Michael Fritz, CEO of tekom/tekom Europe, and Daniela Straub, tekom

In autumn 2013, a tekom working group with representatives from both the industrial/service sector and academia established a standard process for information development in the field of technical communication. The standard process is an idealized model of actual workflows and can thus serve as a reference for various purposes. It shows the relationship between work activities, tasks and challenges on the one hand and the knowledge, skills and competencies required of a person working in the field of technical communication on the other. In the future, the standard process will become the basis for defining qualifications for technical communicators as well as training content and learning goals.