Abstracts

Situated LSP translation from a cognitive translational perspective
Ralph Krüger, Cologne University of Applied Sciences (Germany)

Since its inception in the 1950s, modern translation studies has undergone a development leading from a perspective on linguistic systems, via a perspective on texts, to a perspective on translational action and the person of the translator. Within the subfield of cognitive translation studies, a similar development can be observed. Starting from an ‘ideal translator’, whose mind served as a kind of storage box for various linguistically oriented translation algorithms, the epistemic interest has gradually widened to include the translator’s individual knowledge requirements and, finally, the translator’s situatedness in real professional environments.


In this presentation, I take a cognitive translational perspective on the field of LSP translation. In a first step, I will survey the three cognitive scientific paradigms of symbol manipulation, connectionism and situated cognition (the currently dominant paradigm in cognitive science) and discuss their influence on the development of cognitive translation studies. Then, I will present the Cologne Model of the Situated LSP Translator (Krüger 2015), which draws on situated cognition and the corresponding translation theory of situated translation (Risku 2004). This model attempts to give a comprehensive account of the relevant factors influencing the LSP translator’s cognitive performance in real-world translation settings. It lists various artefacts in the translator’s working environment which, according to situated translation, form an integral part of the ‘translational ecosystem’ and hence of the translator’s cognition. I will discuss some of these artefacts, illustrate how they may influence the translator’s performance in professional contexts, and show which practically relevant research perspectives can be derived from them.


With regard to the context of the European Academic Colloquium, I will also attempt to build a bridge between the fields of LSP translation and technical writing. In specialised communication studies, it is often claimed that these two fields show more similarities than differences and that they are increasingly converging (e.g. Schubert 2007). To do justice to this convergence, I will identify several components of the Cologne model which could readily be applied to the situated technical writer.

A tracking study on technical content navigation behavior
Geert Brône, University of Leuven (Belgium)
Birgitta Meex, University of Leuven (Belgium)

Background: Digitalization and connectivity are affecting all aspects of life and industry. The Digital Revolution also impacts technical communication in that it establishes a new kind of interaction between users and a product. User expectations are changing accordingly. Customers not only ask for better search capability but also want content on mobile devices as well as videos, audio, and animations. For example, the DCL and CIDM 2015 Trends Survey observes a move from PDF to HTML publishing. However, trends surveys and recent studies (Straub 2015) reveal that most content is still published in printed form or as PDFs delivered through the corporate website. Among the often-heard shortcomings of PDF (and paper) in comparison to online documentation are limited navigation options, limited multimedia support and a low degree of responsiveness (Fritz & Klumpp 2015).


Objectives: In order to gain more insight into the pros and cons of PDFs and into how information is gathered from a user manual, it is helpful to test empirically how users handle documentation. So far, little research has been done to investigate this. To fill this research gap, we conducted an experimental study using screen tracking software to track the dynamics of users’ navigation paths through a PDF document. The objective of this pilot study is to assess participants’ viewing, scrolling, clicking and tapping behavior, their choices and lingering times, and to relate the obtained information to subsequent task performance (viz. an information search task) at a later stage (Meex & Brône in press).
Research questions: Among the questions addressed are the following: How do users navigate through web-based technical content space while performing a typical exploration task? Which particular contents do they look at, in which order, and how much time do they spend lingering on those contents? Do participants prefer PDFs or videos for gathering (new) technical information? Can the preliminary findings of this small-scale study confirm the above-mentioned industry trends? What can content designers learn from users’ navigation behavior in order to create user-friendly technical content and guide the development of future formats?


Method: To answer these questions, we set up a controlled experiment with 12 participants in a typical task-oriented setting, using screen-recording software: participants’ screen (inter)actions were captured with the program Camtasia Studio. The recorded data were analyzed both manually and with the multimedia annotation tool ELAN.
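As a minimal sketch of the kind of analysis such recordings afford, the following Python snippet aggregates dwell time per annotated content section from a simplified tab-delimited annotation export. The column layout, tier name and file name are assumptions made here for illustration; the exact structure of an ELAN export depends on the chosen export options.

```python
# Sketch: aggregate dwell time per annotated content section from a
# simplified tab-delimited annotation export. The column layout
# (tier, begin_ms, end_ms, label), the tier name "page_view" and the
# file name are hypothetical, not the actual study pipeline.
import csv
from collections import defaultdict

def dwell_times(path):
    totals = defaultdict(int)  # label -> total dwell time in ms
    with open(path, newline="", encoding="utf-8") as f:
        for tier, begin_ms, end_ms, label in csv.reader(f, delimiter="\t"):
            if tier == "page_view":
                totals[label] += int(end_ms) - int(begin_ms)
    return totals

# Print sections in descending order of dwell time.
for label, ms in sorted(dwell_times("session01.txt").items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"{label}: {ms / 1000:.1f} s")
```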


Results: Navigation behavior seems to be characterized by a linear reading mode: participants tended to read the manual from beginning to end, with very little direct navigation to non-adjacent sections of the text or clicking through to other available resources. Another observation was that participants generally did not interact with the textual or graphical content provided in the document by hovering over or highlighting specific segments. With respect to lingering time, the analysis revealed that only in some cases was there a significantly longer dwell time on pages containing special points of interest, such as warning messages, safety-related information, tips and tricks, and an overview of the content. Finally, another striking (but not surprising) observation is that motivation seemed to decrease over time. Regarding the question of whether participants viewed the user information in an order different from the (random) order in which it was presented to them, our observations show that two thirds of them made no alternative delivery-format choices.

Rule-based machine translation in a student’s project – analyzing and optimizing the output by means of pre-editing the source text
Marion Wittkowsky, Flensburg University of Applied Sciences (Germany)

The requirements placed on future technical translators and technical writers have changed since the turn of the millennium. The reasons for this change are, among others, newly developed language technologies, vastly changing working environments for language specialists, and the fact that highly complex machinery and procedures require that much more technical documentation be written and translated. Torrejón and Rico (2002) describe an approach to teaching translation students pre-editing and post-editing of texts for machine translation systems at the Universidad Europea in Madrid.


At the time of their publication, the inclusion of pre-editing could definitely be seen as a new aspect in the education of translators. The MT approach of pre-editing texts by applying controlled language is a promising one because, as a side effect, the quality of the source documents also increases.


At the University of Applied Sciences in Flensburg, Germany, our Master programme in International Technical Communication includes a course project in which MT is taught on the basis of the following research questions:
“Does controlled language applied to source texts affect the quality of target texts produced by rule-based MT? What differences can be achieved using different writing styles?”


In the last few years, we have been working with Lucy LT (a machine translation solution from Lucy Software), which allows students to add or edit terminology in an integrated lexicon.
During the project, the students, among other things, analyze and evaluate machine-translated technical texts (DE and/or EN), add or edit terminology as required, resolve syntactic ambiguities, and edit the texts in several steps, with a translation following every completed step.


The results of these numerous translations are sometimes very different, depending on how far the students go in trying to find the best way to adapt the source text. Of course, short sentences yield better output, and avoiding ambiguities solves many problems. However, this cannot be achieved by applying controlled language alone.
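To give a flavour of what automated support for such pre-editing might look like, the following sketch flags sentences that violate two simple controlled-language rules. The maximum sentence length and the list of ambiguity triggers are illustrative assumptions, not the rule set used in the Flensburg project.

```python
# Sketch: flag sentences that violate simple controlled-language rules
# before rule-based MT. MAX_WORDS and AMBIGUOUS are illustrative
# assumptions, not the project's actual rule set.
import re

MAX_WORDS = 20
AMBIGUOUS = {"replace", "check", "may"}  # hypothetical ambiguity triggers

def check_source(text):
    findings = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            findings.append((sentence, f"too long ({len(words)} words)"))
        hits = AMBIGUOUS.intersection(w.lower().strip(",.") for w in words)
        if hits:
            findings.append((sentence, "potentially ambiguous: " + ", ".join(sorted(hits))))
    return findings

for sentence, issue in check_source("Check the valve. You may replace the filter."):
    print(f"- {issue}: {sentence}")
```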
One further aspect is also very important in the project: “What do we expect from a text that has been translated using rule-based MT?” The students learn how the MT system functions and try to adapt the source texts accordingly to obtain optimized output. Yet the quality levels after all the changes have been made vary enormously.


Although we have yet to find a universal solution for generating high-quality machine translation output, students gain a broad understanding of rule-based machine translation. Master’s students who are trained technical translators or technical writers learn to reflect on the advantages and disadvantages of accepting lower-quality target texts, provided that the texts are understandable for a specific target audience. In addition, students obtain first-hand experience in improving source texts through appropriate editing. Should a post-editing step be required, translators who are involved in the whole process can easily execute this step. For post-editing, well-educated translators are needed, as it is not enough to be a native speaker.

From Interpreting Studies to Translation Studies: A model for quality assessment
Maurizio Viezzi, University of Trieste (Italy)

“Quality is an elusive concept, if ever there was one” (Shlesinger 1997:123). Yet it is a goal shared by all those who are involved in whatever capacity in the translation process or happen to get in contact with the translation product.

House says that “it seems unlikely [...] that translation quality assessment can ever be completely objectified in the manner of the results of natural science subjects” (1977: 64), but Hatim & Mason strike a more positive note when they propose to “elaborate a set of parameters for analysis which aim to promote consistency and precision in the discussion of translating and translations” (1990: 5).

The paper will present a translation quality model based on four such parameters. The model was actually developed for interpreting, but it should come as no surprise that it is being proposed for translation as well since, whatever the differences between the two, translation and interpreting share the same nature and functions.

The model is based on the following principles:

  • quality may be said to consist of the features enabling a product or a service to meet specific goals or needs (cf. AFNOR in Lord 1993);
  • the concept of quality is therefore relative, rather than absolute;
  • translation may be seen as source-text induced target-text production for a third party (cf. Neubert 2000: 10);
  • the translator’s goals may be said to be:
    • equivalence, i.e. the production of a target text (TT) that is equivalent to the source text (ST) in terms of communicative function and overall sense;
    • accuracy, i.e. the production of a TT that accurately reformulates the information content of the ST;
    • appropriateness, i.e. the production of a TT that meets the norms governing the type of text being translated and meets the addressee’s needs and expectations;
    • usability, i.e. the production of a TT that is usable – easily read and understood, with particular reference to such aspects as coherence, cohesion etc.;
  • since quality may also be seen as a function of the attainment of goals, the aforementioned goals also serve as parameters for translation assessment;
  • translation quality will depend on the extent to which the translator manages to produce a TT that is equivalent to the ST, accurate, appropriate and usable.
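Purely for illustration, the following sketch renders the four parameters as a simple data structure. Rating each parameter on a 0-1 scale and averaging the ratings is an assumption made here for concreteness, not a scoring scheme prescribed by the model.

```python
# Sketch: the four assessment parameters as a data structure. The 0-1
# scale and the averaging are illustrative assumptions; the model itself
# does not prescribe a numeric scoring scheme.
from dataclasses import dataclass

@dataclass
class TTAssessment:
    equivalence: float      # communicative function and overall sense preserved
    accuracy: float         # information content accurately reformulated
    appropriateness: float  # text-type norms and addressee expectations met
    usability: float        # readability: coherence, cohesion, etc.

    def overall(self) -> float:
        # Quality as a function of the attainment of the four goals.
        return (self.equivalence + self.accuracy
                + self.appropriateness + self.usability) / 4

sample = TTAssessment(equivalence=0.9, accuracy=0.8,
                      appropriateness=0.7, usability=0.85)
print(f"overall quality: {sample.overall():.2f}")
```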


The paper will present the model and discuss it with reference to specific examples of technical translation.

An illustrated technical text in translation:
Choice Network Analysis as a tool for depicting word-image interaction
Anne Ketola, University of Tampere (Finland)

Illustrations are an integral feature of technical communication. When these multimodal products are translated, translators process both verbal and visual information, referred to in this study as the verbal source text and the visual source text. However, translation studies has yet to assess whether and how images are involved in the translator’s interpretation of the multimodal source text and, consequently, in translation choices. The research interest of the present study lies in word-image interaction in the translation process of illustrated technical texts. The data of the study consist of eight translations of an illustrated technical text presenting the operating principles of two types of separation devices used in the mining industry for ore beneficiation. The data were produced by a group of Master’s-level translation students during a technical translation course from English to Finnish.


The study set out to test one possible method of inquiring into the effect of word-image interaction on the translators’ choices, namely Choice Network Analysis (CNA). CNA compares translations of the same source text by multiple translators: different translation solutions are collected into a network-like flowchart, which makes it possible to empirically derive the options (the set of possible solutions) that were available to the translators when translating each verbal item. When applied to the data of the study, the choice networks represent the options that the multimodal source text offered to the translators. The study set out to assess whether these options were based entirely on verbal information or on a negotiation of meaning from two different modes.
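To illustrate the underlying idea in minimal form, the following sketch derives a rudimentary choice network (a frequency count of the solutions chosen for each source item) from a set of translations. The input format and the Finnish renderings are hypothetical simplifications; actual CNA networks are richer, flowchart-like structures.

```python
# Sketch: derive a simple choice network from multiple translations of
# the same source items. The input format (one chosen rendering per
# source item and translator) and the example renderings are hypothetical
# simplifications of CNA's flowcharts.
from collections import Counter, defaultdict

def choice_network(translations):
    """translations: list of dicts mapping source item -> chosen rendering."""
    network = defaultdict(Counter)
    for version in translations:
        for source_item, rendering in version.items():
            network[source_item][rendering] += 1
    return network

versions = [
    {"drum": "rumpu", "feed": "syöte"},
    {"drum": "rumpu", "feed": "syöttö"},
    {"drum": "tela", "feed": "syöttö"},
]
for item, options in choice_network(versions).items():
    print(item, "->", dict(options))
```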


The analysis of the data suggested that the multimodal source text offered options that the verbal source text on its own did not. In other words, the translators processed both modes of the source text and formed their translation solutions based on information negotiated from the combination of the two. When translating the names of the different parts of the equipment, for instance, some of the options displayed by the choice networks corresponded to the verbal information as closely as is possible between two different languages. Yet some of the options deviated from the verbal information to a lesser or greater degree. A comparison with the visual source text revealed that these options, in fact, corresponded to the way the information was represented in the image. The study found that visual information could modify verbal information; in the most extreme cases, visual information could cause verbal information to be disregarded altogether. All in all, the study concluded that the images were capable of reattributing the meaning of verbal items in translation.

An indexical analysis of multilingual communication: Identifying issues for French writers in English
Dacia Dressen-Hammouda, Blaise Pascal University Clermont-Ferrand (France)

In the area of multilingual communication and English as an Additional Language (EAL) writing (Flowerdew 2013; Lillis & Curry 2010), it is widely assumed today that a number of writing difficulties will be shared by a relatively homogenous cultural group of writers who share “similar educational, disciplinary, professional and sociocultural backgrounds, besides a common language” (Moreno 2010: 66). Many studies which address such issues tend to approach the subject from a micro-level angle, in terms of making English-language norms explicit for EAL writers: lexical bundles, citation practices, pronoun usage, cohesion, anaphors/deixis or move analysis.

An alternative way of approaching the issue is through the lens of indexicality, whereby it can be demonstrated that the specific features which characterize a culturally homogenous group of writers (e.g., native ‘French’ writers, native ‘German’ writers) in fact reflect that culture’s specific educational and learning environment.

This paper reports on results from an indexical study of the features of a learner corpus of French L1 scientific writing, using methods described in Dressen-Hammouda (2014). A corpus of 14 graduate student research papers from an M.A. program in technical communication and multilingual information design was analyzed: 3 of the papers are written in English, 11 in French. The study sought to determine which features typical of school writing can be found in French L1 writing, whether in French or in English.
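As a minimal sketch of the kind of frequency count such a corpus study might involve, the snippet below tallies hypothetical surface markers across a directory of text files. The marker list and directory name are purely illustrative assumptions; the indexical features analyzed in the study are considerably more fine-grained.

```python
# Sketch: count hypothetical surface markers of school-writing style
# across a small corpus of plain-text files. The marker patterns and the
# directory name are illustrative, not the study's indexical feature set.
import re
from pathlib import Path

MARKERS = {
    "first_person_plural": r"\b(nous|we)\b",
    "rhetorical_question": r"\?",
    "enumeration_connector": r"\b(d'abord|ensuite|enfin|firstly|secondly)\b",
}

def count_markers(corpus_dir):
    counts = {name: 0 for name in MARKERS}
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for name, pattern in MARKERS.items():
            counts[name] += len(re.findall(pattern, text))
    return counts

print(count_markers("learner_corpus"))
```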

This paper will first describe indexicality as a research tool and will then compare the features of English and French argumentation as described in the published literature. These features are then compared to those of the learner corpus. The results of this study serve as a basis for better helping French L1 writers master the complexities of constructing appropriate texts in English.

Technical Communication Programmes: Building Competences Needed in the Workplace
Tytti Suojanen, University of Tampere (Finland)
Jenni Virtaluoto, University of Oulu (Finland)

Working life relevance has become a hot topic in the university world in recent years. On the one hand, the relevance of academic skills such as information retrieval or project management in working life settings has been emphasized. On the other hand, universities are increasingly seeking industry cooperation that would enhance and diversify students’ substance know-how within a specific field as well as develop their working life skills (e.g. Teichler 2015). Both of these dimensions require a clear definition of the overall competences within a given field.
In technical communication, efforts have been made to compile a generally and internationally recognized set of relevant skills and competences, such as the TCBOK (www.tcbok.org). The most recent effort, the TecCOMFrame project, will provide a much-needed joint competence framework, which will undoubtedly help unify and raise the profile of technical communication programmes throughout Europe. However, we also need practical applications showing how these competences can actually be developed.


One important area in the building of professional skills and competences in technical communication is academia-industry cooperation. While many academic programmes are only beginning to engage in such cooperation, technical communication programmes are in an advantageous position: many of them have experience in this area, as the field has a basic practical orientation and academia-industry cooperation has been a much-discussed topic from early on (e.g. Staples & Ornatowski 1997). Another core area is the choice of teaching methods that support the development of professional skills and competences.


In this presentation, we will first consider the general dimensions of working life relevance in academic studies. These include, for example, the transfer of knowledge to professional problem-solving and the innovation and reflective abilities of graduates (Teichler 2015; Tynjälä et al. 2004). Second, we will take a look at how these dimensions relate to technical communication studies and the building of professional competences in the field. Third, we will provide selected practical examples of academia-industry cooperation (e.g. use of industry instructors, internships) and of teaching methods that support the development of professional skills and competences (e.g. portfolios) from the universities of Oulu and Tampere in Finland. Both the benefits of these practices and the challenges related to them will be discussed.