
Stellenbosch Papers in Linguistics Plus (SPiL Plus)

On-line version ISSN 2224-3380
Print version ISSN 1726-541X

SPiL plus (Online) vol.53  Stellenbosch  2017

http://dx.doi.org/10.5842/53-0-740 

ARTICLES

 

It may be copyrighted, but it still needs help: Improving research questionnaires by means of intralingual translation

 

 

Susan Lotz

Language Centre, Stellenbosch University, South Africa. E-mail: slotz@sun.ac.za

 

 


ABSTRACT

Clear research questionnaires ultimately help to ensure the reliability and comparability of the data that they gather (Fowler 1992; Lenzner 2012; Moroney and Cameron 2016). This paper explores the intersection of best practices in the fields of questionnaire design and intralingual translation as a means to ensure clarity and comprehensibility in research questionnaires. The questionnaire design perspective on comprehensibility (as represented by the 2010, 2011 and 2012 studies by Lenzner and colleagues, and work done by Knäuper et al. (1997) and Krosnick (1991)) essentially requires intralingual translation for questionnaires that do not meet the clarity requirement. To illustrate how intralingual translation in the form of plain language practice can operationalise comprehensibility (Nisbeth Jensen 2015), a short case study is presented. It chronicles a case of interlingual translation that has evolved into an intralingual translation endeavour. A client had a copyrighted medical research questionnaire, originally in American English, translated into Afrikaans and isiXhosa. Initially, the language service provider was not allowed any interventions in the source text. Testing of this questionnaire and its translations then revealed that the questionnaires were incomprehensible to their respondents. In this paper, the intralingual interventions required to improve the comprehensibility of the questionnaire are classified in terms of the four parameters that Zethsen (2009) has identified in this regard, namely knowledge, time, culture and space. In addition, a fourfold text assessment checklist for ensuring clarity in questionnaires is proposed. This checklist may prove valuable for highlighting areas in questionnaires that need intralingual translation - whether used as motivation for a client or as a starting point for an intralingual intervention itself.

Keywords: comprehensibility, intralingual translation, plain language, questionnaire design, questionnaires


 

 

1. Introduction

We ask questions to find things out. Accordingly, researchers use research questionnaires to gather information by asking respondents about their beliefs, values, attitudes and behaviours. Research questionnaires thus have a dual nature: as texts "destined for discourse" (Harkness and Schoua-Glusberg 1998:95) and as instruments of measurement (Moroney and Cameron 2016:10, 11).

Questionnaires could be regarded as a medium for a restricted question-and-answer dialogue (Jansen and Steehouder 2001:14), and it is imperative that respondents interpret the questions that they are asked in the same way that the researcher intended those questions to be understood. Without common understanding, bias can arise, which will threaten the validity of a questionnaire.

According to Van de Vijver (2013), bias is the primary methodological threat to valid inferences in comparative studies. He distinguishes between three sources of bias, namely (i) the construct under study, (ii) methodological aspects of an instrument or sample, and (iii) specific items. He groups translation issues, the inapplicability of item contents in some cultures and the use of words or expressions that could be ambiguous under the third factor that can introduce bias, namely specific items. From this, it follows that clarity in questionnaires ultimately contributes to the reliability and comparability of the gathered data (Fowler 1992; Lenzner 2012; Moroney and Cameron 2016).

There are various standards and principles for questionnaire design (cf. Belson 1981; Jansen et al. 1989; Tourangeau, Rips and Rasinski 2000; Saris and Gallhofer 2007); however, this paper will focus specifically on the requirement and achievement of clarity. Often, existing text requires intralingual translation: the rewriting of a text in the same language to improve clarity1 (Nisbeth Jensen 2015).

In the literature review, I will explore how perspectives and best practices from the fields of questionnaire design, psycholinguistics, intralingual translation, plain language practice and comprehensibility all come into play when endeavouring to ensure clarity in a research questionnaire. Next, a short case study will be presented, relating how an interlingual translation assignment has evolved into an intralingual translation assignment due to the need for clarity in a research questionnaire. In the results and discussion section, I will discuss the interventions that the questionnaire required. Subsequently, I will propose a fourfold checklist to assist with ensuring that questionnaires are comprehensible and clear. The paper will conclude with a brief summary of the approaches that were explored, the case study and the checklist for ensuring clarity that I propose to help with future work in this regard.

 

2. Literature review

The fields of questionnaire design and intralingual translation intersect in that both strive for, among other things, comprehensibility. In this section, I will explore that intersection.

2.1. Questionnaire design

In their research on questionnaire design, Lenzner, Kaczmirek and Lenzner (2010) have identified seven text features in questionnaires that hamper reading comprehension and thereby increase the cognitive burden imposed on readers. These problematic features are low-frequency words, vague or imprecise relative terms, vague or ambiguous noun phrases, complex syntax, complex logical structures, low syntactic redundancy and bridging inferences. Lenzner, Kaczmirek and Galesic (2011) extended the research on these problematic features using eye-tracking technology to examine word/phrase fixation times, question fixation counts and question fixation times while respondents answered two versions of similar questions in a web survey.

The authors found strong evidence that at least six of those features reduced question comprehensibility - only bridging inferences2 did not significantly undermine comprehensibility. They also found that the six features that did influence question comprehensibility did so irrespective of the type of question - that is, whether it was an attitudinal, factual or behavioural question.

Subsequently, Lenzner (2012) explored whether the cognitive effort required from a respondent to comprehend survey questions affected the data that they produced. He found that data quality was reduced if questions were hard to understand and if those questions required more processing effort than respondents were willing or able to invest. That finding ties in with Krosnick's (1991) satisficing theory, which suggests that respondents change their response strategy particularly when their motivation to work and think hard in order to answer questions optimally is challenged. Instead of making the mental effort that is necessary to generate an optimal answer, they "compromise their standards and expend less energy instead" (1991:215). Consequently, they provide a satisfactory answer instead of an optimal one. This behaviour is called 'satisficing'. One of the ways to mitigate satisficing is to ensure that the question at hand is as comprehensible as possible - that it is clear, in other words - leaving more cognitive capacity available for the respondent to produce a high-quality answer.

In their research on the quality of the data gathered by means of questionnaires, Knäuper et al. (1997) derived nine question characteristics related to question difficulty from the literature on questionnaire design, namely question length,3 question complexity, instructions, introductory phrases, ambiguous terms, retrospective reports, frequency reports, quantitative reports and response scales. I would like to highlight two of those characteristics. Question complexity - that is, whether a sentence contains embedded sentences or inverted sentences - has a direct bearing on clarity. "Complex syntactical structures will tax the ability to apply the appropriate parsing and inference rules that are necessary to comprehend and understand the meaning of the question", according to Knäuper et al. (1997:187). Ambiguous and abstract terms have a direct influence on the clarity of a question in that respondents are required to derive the meaning of such terms. Comprehension problems can occur if respondents do not have the required frame of reference to draw from. Intralingual translation concerns itself with exactly this: the respondent or target audience's ability to make sense of what they read or encounter linguistically.

2.2. Intralingual translation

Intralingual translation - or "rewording", according to Jakobson (1959:233), the father of intralingual translation - entails the transfer of meaning within the same natural language, a concept that paradoxically expands the general notion of translation by limiting the activity to one natural language. A text could thus be rewritten in a different register or dialect while staying in the same language. The complexity of the text could be increased or decreased, depending on the needs of the target audience.

2.2.1 Four parameters

In order to describe the characteristics of and the microstrategies involved in intralingual translation, Zethsen (2009) analysed a set of Danish intralingual Bible translations and identified four parameters "that seem to be influential in intralingual translation" (2009:805), namely knowledge, time, culture and space. These parameters are not neatly demarcated and often overlap.

The knowledge parameter involves the target audience's ability to comprehend, for example their ability to understand a text, their background knowledge and their level of expertise concerning the subject at hand. Typical candidates for intralingual translation driven by the knowledge parameter are documents containing information that experts convey or explain to laypersons, such as medicine package inserts aimed at patients, manuals for appliances, informed consent forms and leaflets explaining new legislation. Children's versions of classical texts, such as the Children's Bible or opera stories for children, also fall in this category.

The time parameter calls for intralingual translation when the fact that a text and its audience are not from the same era creates comprehension difficulty at some level. This parameter is closely connected with the parameters of culture and knowledge, but it is often "the diachronic factor which results in the lack of knowledge or cultural understanding" (Zethsen 2009:806). An example of the diachronic factor driving intralingual translation is new versions of a classic text, such as the Bible: In English, the New International Version and the New Living Translation are good examples of texts aiming to accommodate and even attract a modern readership to ancient texts.

The parameter of culture allows for intralingual translation to explain cultural references in a text that may render that text inaccessible to an audience who cannot decode the cultural references in question. This parameter may go hand in hand with the parameters of time or background knowledge in the case of an ancient text, for example. Some biblical customs may seem quite foreign to modern readers, even if narrated in contemporary language, when those practices are not explained or given illuminating context. Another realisation of the parameter of culture occurs when an English book is reworked to produce an American version. Consider the case of the distinctly British Harry Potter books being published in a special American edition for which the publisher chose to substitute cultural words such as 'biscuits', 'football', 'Mummy', 'rounders' and 'sherbet lemons' with 'cookies', 'soccer', 'Mommy', 'baseball' and 'lemon drops' (Hatim and Munday 2004:4-5 cited in Zethsen 2009:807). Localisation is another incarnation of the parameter of culture in that the aim often is "to produce different cultural versions of the same text within the same language" (Zethsen 2009:807).

The parameter of space involves the reduction or extension of text - "the physical space of the text is changed" (Zethsen 2009:807). Shortened versions of classical texts such as easy readers, news reporting or subtitling for the deaf (Snell-Hornby 2006:21 cited in Zethsen 2009:807) are all instances of summarising by means of intralingual translation related to the parameter of space. Annotated publications are a good example of the extension of text that this parameter encompasses. I would like to suggest that the "physical space of the text" also includes its layout and visual organisation on the page. Many definitions of plain language practice indeed incorporate the layout of the text, among others those in Schriver (1991), Kimble (1996/7), Cheek (2010), Schriver and Gordon (2010), Cutts (2013) and Cornelius (2015). This brings us to the next concept to discuss: plain language.

2.2.2 Plain language practice

In South Africa, consumer protection legislation (the National Credit Act, No. 34 of 2005, and the Consumer Protection Act, No. 68 of 2008) requiring that plain language be used in contracts and related documentation has given plain language much-needed prominence. In the United States of America (USA), the Plain Writing Act of 2010 (H.R. 946; Pub.L. 111-274) has given plain language further momentum globally. Movements advocating plain language (or plain English, at the time) started as early as the late 1960s in the USA and the early 1970s in Britain, in both cases driven by consumer movements dissatisfied with linguistic obscurity in legal documents and government forms.

Plain language practice can be regarded as a form of intralingual translation to facilitate better understanding (Cornelius 2010; Nisbeth Jensen 2015). Plain language practice "involves all the techniques for clear communication - planning the document, designing it, organising it, writing clear sentences, using plain words, and testing the document whenever possible on typical readers", according to Kimble (1996/7:2); however, pinning plain language down in a neat definition for all contexts is not a simple task.

In an endeavour to put forward a standard definition of plain language, the International Plain Language Working Group has considered a large number of definitions of plain language. Following James (2008), it grouped those definitions into one or more of three categories, namely (i) numerical or formula-based definitions, (ii) elements-focused definitions, and (iii) outcomes-focused definitions (Cheek 2010).

The first category of definitions is formula based.4 Word and sentence length, number of syllables, length of paragraphs and font size are examples of factors that are used to determine the readability and plainness of a document, usually by means of formulas. In this way, a relatively objective score can be allocated; however, the formulas are overly simplistic, could be misleading and do not give guidance on how to improve comprehensibility.
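To make the formula-based category concrete, the sketch below computes the Flesch Reading Ease score, one of the classic readability formulas of the kind referred to here, for two phrasings of a question. The formula itself is standard; the crude syllable counter and the example sentences are illustrative assumptions and do not come from this article or from any particular plain language standard. Higher scores indicate easier text.

```python
import re

def count_syllables(word: str) -> int:
    """Very crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Two hypothetical phrasings of a similar question: the plainer one scores higher.
print(round(flesch_reading_ease("Does the patient require assistance with ambulation?"), 1))
print(round(flesch_reading_ease("Does the patient need help to walk?"), 1))
```

As noted above, such a score flags potential difficulty relatively objectively, but it offers no guidance on how to fix it.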

The second category is elements focused, with the emphasis on techniques to write clearly. Structure, design, content and vocabulary inform this approach, and it is likely to reflect a text's readability much better than a formula-based approach, although it would take more time and require judgement and writing skill. Since it also provides guidance on improving writing and comprehensibility and can be tailored to different groups, this approach is the most useful of the three in the context of ensuring the clarity and comprehensibility of research questionnaires.

The third and last category of definitions is outcomes focused, and, alongside linguistic characteristics, it emphasises the consideration of visual elements making documents easy to read. It advocates testing documents to evaluate their usability, which can give specific input on improving a document and users' experience of a document. It is the hardest approach to use, and testing may be impractical in many cases. For research questionnaires, however, testing is paramount.

Although the International Plain Language Working Group acknowledges that all three types of definitions would ultimately play a role when evaluating a text to determine whether it is in plain language, the group leans towards the third option in its overall recommendation of a definition:

A communication is in plain language if it meets the needs of its audience - by using language, structure, and design so clearly and effectively that the audience has the best possible chance of readily finding what they need, understanding it, and using it.

Cheek (2010:5)

In practice, plain language practitioners thus apply specific strategies to make a complex text more accessible to its target audience, which range from ensuring that content is arranged logically to syntactical and lexical interventions - essentially following the elements-focused approach mentioned above. Preferring verb forms over nominalisations and active verbs over passive verbs, giving special attention to vocabulary, sentence structure and length, and optimising the design and layout of texts are practical examples of interventions for plain language. There are, however, no definite rules, 'recipes' or checklists to follow - the judgement of the plain language practitioner, informed by guidelines and experience, should determine the best course of action.

2.2.3 Comprehensibility

I use 'comprehensibility' in the same functionalist sense as Nisbeth Jensen (2015) chose to use it since the focus of this paper is on the comprehensibility of the text "in relation to its receiver" (2015:165). This approach resonates well with the plain language definition by the International Plain Language Working Group mentioned above. Comprehensibility is thus a quality of a text that depends on the degree to which its target audience finds the text understandable and that will change if the target audience changes. This approach also dovetails with the main evaluation criterion of the Karlsruhe comprehensibility model by Göpferich (2009), namely "whether a text fulfils its communicative function" (2009:49).

After having conducted a plain language literature review to identify the elements extensively cited to be detrimental to comprehensibility, Nisbeth Jensen (2015) created a comprehensibility framework. She found that specialised terminology, officialese, nominalisations, passive voice, compounds and synonymy were to be avoided when translating a text intralingually to improve comprehension. She subsequently confirmed the usefulness of plain language to optimise comprehensibility by applying her framework to the intra- and interlingual translation of patient information leaflets that accompany medicine.
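As a rough illustration of how some of the elements in this framework could be operationalised, the sketch below flags surface cues for possible nominalisations, passive constructions and long (potentially specialised) words in an English sentence. This heuristic is an illustrative assumption, not Nisbeth Jensen's framework or any part of the method used in this study; as the next paragraph stresses, comprehensibility ultimately has to be judged in relation to the receiver and tested with the target audience.

```python
import re

# Rough surface cues for a few of the elements Nisbeth Jensen (2015) lists as
# detrimental to comprehensibility; real assessment requires human judgement.
NOMINALISATION_SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ity")
PASSIVE_PATTERN = re.compile(r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b", re.I)

def flag_comprehensibility_risks(sentence: str) -> dict:
    """Return rough counts of surface cues that may signal harder-to-read text."""
    words = re.findall(r"[A-Za-z]+", sentence)
    return {
        "possible_nominalisations": [
            w for w in words if w.lower().endswith(NOMINALISATION_SUFFIXES) and len(w) > 6
        ],
        "possible_passives": PASSIVE_PATTERN.findall(sentence),
        "long_words": [w for w in words if len(w) >= 12],  # crude proxy for specialised terms
    }

# Hypothetical example sentence, not an item from any questionnaire discussed here.
print(flag_comprehensibility_risks(
    "Ambulation of the patient was accomplished without the utilisation of assistance."))
```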

Comprehensibility is relative, however: "What is plain to one reader may be incomprehensible to another, and irritatingly simplistic to a third" (Stewart 2010:67). Using plain language as a means to improve comprehensibility "can only lead to an increased likelihood that something is comprehensible; it can never be a guarantee" (Nisbeth Jensen 2015:185). Precisely because comprehensibility is so subjective, it is necessary to test research questionnaires in particular with representative audiences (Hartley 1988; Jansen et al. 1989; Jansen and Steehouder 2001).

The questionnaire design perspective on comprehensibility (as shown by the 2010, 2011 and 2012 studies by Lenzner and his colleagues, and the work done by Knäuper et al. (1997) and Krosnick (1991) that I have mentioned) essentially requires intralingual translation, specifically plain language practice, for questionnaires that do not meet the clarity requirement. Plain language practice is in fact a way of operationalising comprehensibility (Nisbeth Jensen 2015). An elements-based approach to plain language practice, of which the comprehensibility framework that Nisbeth Jensen suggests is an essential part, and the parameters of knowledge, time, culture and space that Zethsen (2009) has identified provide much-needed beacons when one is faced with a questionnaire that needs to be reworked at an intralingual level to be more comprehensible. The case study presented in the next section will illustrate this.

 

3. Case study

According to Susam-Sarajeva (2009:40), a case in translation studies could be

a unit of translation or interpreting-related activity, product, person, etc., in real life, which can only be studied or understood in the context in which it is embedded. A case can be anything from a translated text or author, translator/interpreter, etc., to a whole translation institution or source/receiving system.

The unit of translation-related activity of interest in this paper involves an allied health practitioner conducting doctoral research who approached a language service provider (LSP) to translate an American copyrighted medical questionnaire into Afrikaans and isiXhosa.5 The intended target audience was stroke survivors and their caregivers at home, who would administer the questionnaire in rural areas of the Western Cape province, South Africa. Given that South Africa is a developing country, it was probable that some respondents would not have completed secondary school education.

The source text (ST) questionnaire was an index involving a scale used to measure stroke survivors' performance in basic daily living activities, the Barthel Index. The ST was an excellent candidate for intralingual translation in that it required transformation concerning all four of Zethsen's (2009) parameters that motivate intralingual translation: knowledge, time, culture and space. However, due to the copyright and additional restrictions imposed by the international research trust holding the copyright, the client was unwilling to have the Index edited prior to translation - despite the LSP's advice before translation commenced, comments on inaccessibility by both translators during the translation process and the fact that the ST itself would also be used to collect data.

After having received the Afrikaans and isiXhosa translations, the client proceeded to test the original Index and the subsequent translations with a representative target audience. The target audience consisted of a small group of stroke survivor and caregiver pairs at a physical rehabilitation facility. Two representatives of the LSP observed the testing. In each instance, the caregiver was asked to complete the Index with input from the stroke survivor whom he or she was paired with. It was envisaged that the caregivers in the actual study would receive some training before administering the Index. It was therefore not surprising that caregivers who participated in the testing had some difficulty in administering the Index without having had an opportunity to become familiar with it before using it. However, the difficulty that they experienced exceeded the expected familiarity issues.

The layout and organisation of the Index, the word choices and the way in which the scoring was to be done proved to be challenging. The stroke survivors had difficulty understanding that their answers did not have to reflect their personal experience (since some of the questions were of a very personal nature) and that the testing was more about determining whether they understood the questions. In addition, they struggled with the meaning of some words in the Index and with scenarios that did not reflect their experience. All in all, the testing revealed that the Index was neither user-friendly nor comprehensible (Nisbeth Jensen 2015) to its target audience, thus confirming the LSP's concerns. This finally convinced the client that it was more important to ensure that her research instrument would be applied effectively to gather valid and reliable data than to try to satisfy the requirements of a research trust that was completely removed from the realities that she had to contend with in her research.

She subsequently agreed to intralingual translation of the ST, after which the improved ST - now a target text (TT) itself - was translated into Afrikaans and isiXhosa once again (due to space constraints, the interlingual translation aspect of this case study is not explored). The three TTs were tested, this time with positive results (the same two representatives of the LSP once again observed the testing). Different caregivers and patients, sourced from the same physical rehabilitation facility, participated in the second round of testing. It was particularly interesting to see how much more easily the caregivers in the second round of testing administered the improved Index, compared with what had been observed in the first round. Many resources had been spent before this stage was reached.

 

4. Results and discussion

According to Oveisgharan et al. (2006), the Barthel Index has been used more than any other such measurement tool in stroke rehabilitation trials. The Index was developed by Mahoney and Barthel in 1965 to score stroke patients' ability to perform daily living activities. It had ten activity categories to be scored, each containing descriptions of different levels of ability (see Column 1 of Table 1 for the original categories). It consisted of just over 900 words.

The quality of an ST has a major impact on the quality of its translation. Further to the argument in the introduction of this paper, when a questionnaire is translated cross-culturally, it is very important to start off with a comprehensible ST to ensure that the data gathered by the ST and its TTs are reliable, valid and comparable (Lenzner 2012; Van de Vijver 2013; Dorer 2015). Despite this fact, the client initially felt that she could not make the necessary amendments to the questionnaire that she planned to use due to the restrictions that the research trust holding the copyright placed on that questionnaire.

The testing of the ST together with its initial TTs eventually proved to the client that the text failed in its communicative function (Göpferich 2009) in that the target audience did not understand what they had to do with the Index and how they had to score their patients. In addition, certain words or concepts created confusion and uncertainty, such as 'ambulate', 'maneuver', 'suspenders', 'loafer shoes', 'girdle', 'brassiere', 'sponge bath' and 'yards'. Problems were also caused by the fact that respondents did not necessarily have some of the facilities that the Index referred to in their homes, such as toilets or showers. Some respondents used buckets or commodes as toilets, or washed using basins, bowls or buckets. At this stage, the client was willing to reconsider upholding the restrictions imposed and consequently allowed intervention for clarity to ensure that the data that she would eventually be gathering with her questionnaire in the three languages would be reliable, valid and comparable.

A certain degree of expert-lay communication was applicable since the target group was mixed, consisting of stroke survivors and family caregivers. The target audience was further situated mostly in rural areas where education up to a certain level was not a given. In view of this, it may be surprising that relatively technical terms in the Index, such as 'transfer' and 'catheter', were not problematic. The reason for this was that the target audience had already had exposure to those concepts, being confronted with the daily necessities of moving or being moved from one place to another and/or using a catheter.

Intralingual translation of the original English ST required plain language practice, which entailed rewording and changing the sequence of the categories of the Index. The sequence of the categories was changed to ensure logical progression, and the categories were reworded to make the vocabulary more accessible to the target audience. Since the numbering of the categories competed with the scoring values on the questionnaire, the numbers were removed. Table 1 reflects the activity categories as they appeared in the ST in Column 1, the rewording done to make the activity categories more accessible in Column 2 and the improved sequence of the categories in Column 3.

Further intervention entailed adding clear instructions and anchors to help with scoring and record keeping, including pronouns of both genders ('he' > 'he or she'), removing content (in consultation with the researcher) that was not applicable to the specific target audience (e.g. deletion of 'sponge bath' and 'loafer shoes'), explicating where necessary ('shower' > 'shower, or wash using a basin, bowl or bucket'), localising American cultural references (yards > metres) and updating dated words ('brassiere' > 'bra').
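Purely to make the lexical layer of these interventions visible, the sketch below collects a few of the substitutions mentioned above into a lookup table and applies them to a sample sentence. The example sentence is hypothetical and not an item from the Barthel Index, and the mapping is an illustration only; the actual intralingual translation also involved re-sequencing, rewording of syntax and layout changes that no mechanical substitution can capture.

```python
# A few of the lexical interventions mentioned above, purely as illustration.
PLAIN_SUBSTITUTIONS = {
    "ambulate": "walk",
    "maneuver": "move",
    "brassiere": "bra",
    "shower": "shower, or wash using a basin, bowl or bucket",
}

def apply_lexical_substitutions(text: str) -> str:
    """Apply simple one-to-one replacements.

    Real plain language practice also restructures content and adjusts layout;
    localising measurements (e.g. yards to metres) additionally requires
    converting the quantity, not just renaming the unit.
    """
    for source_term, plain_term in PLAIN_SUBSTITUTIONS.items():
        text = text.replace(source_term, plain_term)
    return text

# Hypothetical example sentence, not taken from the questionnaire in the case study.
print(apply_lexical_substitutions(
    "The patient is unable to ambulate but can maneuver a wheelchair independently."))
```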

Table 2 shows intralingual solutions offered for challenges in the ST and how those interventions could be classified according to the parameters (interventions regarding space included changes to the layout, which could unfortunately not be shown in the table):

The final TTs in this case study were an intralingual translation, the English TT, and two subsequent interlingual translations, the Afrikaans TT and the isiXhosa TT. The three new TTs were tested once again, which yielded positive results. The respondents found the texts more comprehensible, and the caregivers were now able to use the Index to score their patients. This outcome is testimony to the value of the intralingual translation for clarity in this translation assignment. It also highlights the value of testing.

According to the client, the research trust did not respond to her enquiries about allowing interventions to the ST. This experience raises the question of what researchers are to do if they are dependent on a regulated measurement instrument but have to amend it for comprehensibility reasons. It is understandable that validated instruments are protected to stay validated, as Juniper (2009) argues, but when that protection undermines the very essence of what the instrument was created to do in the first place, namely to gather valid and reliable data, this practice should be revisited. In this regard, Brislin (1986:150) states that "[to] obtain good translations, and thus good terms for data gathering, modifications of existing instruments often have to be made". In the same vein, Bracken and Barona (1991:121) remark that "because the translation of tests can be difficult, time consuming and inherently error prone, an instrument being considered for translation should be maximally useful, practical and error-free in its source language variation".

Moreover, when the target audience of a validated instrument changes, the validity of that instrument is also influenced. In other words, the context or testing situation influences the validation process significantly (D'Este 2012), and an instrument is only valid in situations for which it has been validated. The LSP could not find a solution to this delicate matter other than to motivate to the client why the changes made to the copyrighted questionnaire were imperative to ensure clarity and comprehensibility. Clarity and comprehensibility, in their turn, would ensure that the questionnaire gathered valid and reliable data.

In Table 3, I propose a fourfold text assessment checklist that summarises the approaches discussed in the literature review. Used in this way, the approaches could be applied together since each of them addresses important aspects that influence clarity in questionnaires from different viewpoints.

The checklist is not meant as a substitute for the judgement of a plain language practitioner but is aimed rather at facilitating the assessment of a text. This checklist could, for example, be applied to highlight areas that need intralingual intervention when a questionnaire is assessed initially. In Table 3, the boxes that have been checked reflect an assessment of the ST in the case study, the Barthel Index.

In retrospect, carrying out an advance translation6 as Harkness and Schoua-Glusberg (1998:105) and Dorer (2015) propose or performing a translatability assessment7 (Conway, Acquadro and Patrick 2014) might have been good starting points for this translation assignment. The incomprehensibility of the ST would then have been exposed sooner, which would have saved the LSP and the client time and money. That way, the ST could have been improved before translation into isiXhosa or any back translations took place. However, the question remains whether clients would be prepared to pay for such precautionary steps before they have personally experienced their measurement instruments failing. Closely linked with the translatability assessment is the option of applying the fourfold assessment checklist proposed in this section. The checklist could be applied to give clients insight into how a research questionnaire could be improved by means of intralingual translation - regardless of whether the initial request was for interlingual translation.

 

5. Conclusion

Clear research questionnaires ultimately help to ensure the reliability and comparability of the data that they gather (Fowler 1992; Lenzner 2012; Moroney and Cameron 2016). This paper explored the intersection of research on best practices in the fields of questionnaire design and intralingual translation as a means to ensure clarity in research questionnaires. A short case study was presented as an illustration.

On the questionnaire design front, the 2010, 2011 and 2012 studies in which Lenzner and various colleagues confirmed that certain text features undermine reading comprehension were surveyed, Krosnick's (1991) satisficing theory was touched on, and the question characteristics related to question difficulty identified by Knäuper et al. (1997) were considered. From an applied linguistics perspective, intralingual translation, and particularly plain language practice as a vehicle to operationalise comprehensibility, was explored. Research by Zethsen (2009), Cheek (2010), Nisbeth Jensen (2015) and Göpferich (2009) informed that discussion.

Subsequently, I presented a case study illustrating how an interlingual translation assignment had evolved into an intralingual translation assignment precisely because of the need for clarity and comprehensibility in a research questionnaire. The value of the intralingual translation performed in this case study was far-reaching. Had it not been done, the client would not have had a usable measurement instrument in any of the three languages in which she wished to work to gather data. Although the research trust did not wish to allow interventions in the ST - supposedly to preserve the instrument's validity and reliability (cf. Juniper 2009) - the very fact that the context in which this instrument would be administered had changed implied that the original validation of the instrument was no longer applicable (cf. D'Este 2012). The developers of measurement instruments, Meier (2008:124) argues, assume that respondents "understand items similarly and in the manner intended by the test developer". Such shared contexts "could become a source of invalidity, however, when [they] function in a manner contrary to the test's intended purpose". If respondents do not understand what is being asked of them, the measurement instrument has surely not fulfilled its purpose.

By incorporating the research discussed in the literature review and combining it with the insight that this case study brought, I proposed a fourfold text assessment checklist for ensuring clarity in questionnaires. This checklist is of practical value in that it could be applied to assess a text to highlight areas that need intralingual intervention. Such assessments could be used to motivate to clients why intralingual translation of their research questionnaires is advisable, and the checklist could be applied by language practitioners as a starting point to map the intralingual intervention that a text requires.

 

Acknowledgements

I would like to thank the following colleagues for their involvement in this project: Elsje Scheffler, Cobus Snyman and Liezl van Zyl.

I would also like to thank Prof Leon de Stadler for allowing a space where this and other research could take place.

 

References

Belson, W.A. 1981. The design and understanding of survey questions. London: Gower.

Bracken, B.A. and A. Barona. 1991. State of the art procedures for translating, validating and using psychoeducational tests in cross-cultural assessment. School Psychology International 12(1-2): 119-132. doi:10.1177/0143034391121010

Brislin, R.W. 1986. The wording and translation of research instruments. In W.J. Lonner and J.W. Berry (eds.) Field methods in cross-cultural research. Beverly Hills: Sage. pp. 137-164.

Cheek, A. 2010. Defining plain language. Clarity 64: 5-15.

Conway, K., C. Acquadro and D.L. Patrick. 2014. Usefulness of translatability assessment: Results from a retrospective study. Quality of Life Research 23(4): 1199-1210.

Cornelius, E. 2010. Plain language as alternative textualisation. Southern African Linguistics and Applied Language Studies 28(2): 171-183.

Cornelius, E. 2015. Defining 'plain language' in contemporary South Africa. Stellenbosch Papers in Linguistics 44: 1-18.

Cutts, M. 2013. Oxford guide to plain English. 4th ed. Oxford: Oxford University Press.

D'Este, C. 2012. New views of validity in language testing. Educazione Linguistica Language Education 1(1): 49-63.

Dorer, B. 2015. Carrying out 'advance translations' to detect comprehensibility problems in a source questionnaire of a cross-national survey. In K. Maksymski, S. Gutermuth and S. Hansen-Schirra (eds.) Translation and comprehensibility. Berlin: Frank and Timme. pp. 77-112.

Fowler, F.J. 1992. How unclear terms affect survey data. Public Opinion Quarterly 56(2): 218-231. doi:10.1086/269312

Göpferich, S. 2009. Comprehensibility assessment using the Karlsruhe Comprehensibility Concept. Journal of Specialised Translation 11: 31-52.

Harkness, J.A. and A. Schoua-Glusberg. 1998. Questionnaires in translation. In J.A. Harkness (ed.) Cross-cultural survey equivalence. ZUMA-Nachrichten Spezial Band 3. Mannheim: ZUMA. pp. 87-126.

Hartley, J. 1988. Designing instructional text. London: Kogan Page.

Jakobson, R. 1959. On linguistic aspects of translation. In R.A. Brower (ed.) On translation. Cambridge: Harvard University Press. pp. 232-239.

James, N. 2008. Defining the profession: Placing plain language in the field of communication. Paper read at the Third International Clarity Conference, 20-23 November 2008, Mexico City, Mexico.

Jansen, C. and M. Steehouder. 2001. How research can lead to better government forms. In D. Janssen and R. Neutelings (eds.) Reading and writing government documents. Amsterdam: Benjamins. pp. 11-36.

Jansen, C., M. Steehouder, K. Edens, J. Mulder, H. Pander Maat and P. Slot. 1989. Formulierenwijzer: Handboek voor het redigeren van formulieren. Den Haag: Sdu Publishers.

Juniper, E.F. 2009. Validated questionnaires should not be modified. European Respiratory Journal 34(5): 1015-1017.

Kimble, J. 1996/7. Writing for dollars, writing to please. Scribes Journal of Legal Writing 6: 1-38.

Knäuper, B., R.F. Belli, D.H. Hill and A.R. Herzog. 1997. Question difficulty and respondents' cognitive ability: The effect on data quality. Journal of Official Statistics 13(2): 181-199.

Krosnick, J.A. 1991. Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology 5(3): 213-236.

Lenzner, T. 2012. Effects of survey question comprehensibility on response quality. Field Methods 24(4): 409-428.

Lenzner, T., L. Kaczmirek and M. Galesic. 2011. Seeing through the eyes of the respondent: An eye-tracking study on survey question comprehension. International Journal of Public Opinion Research 23(3): 361-373.

Lenzner, T., L. Kaczmirek and A. Lenzner. 2010. Cognitive burden of survey questions and response times: A psycholinguistic experiment. Applied Cognitive Psychology 24(7): 1003-1020.

Meier, S.T. 2008. Measuring change in counseling and psychotherapy. New York: Guilford Press.

Moroney, W.F. and J. Cameron. 2016. The questionnaire as conversation: Time for a paradigm shift, or at least a paradigm nudge? Ergonomics in Design 24(2): 10-14.

Nisbeth Jensen, M. 2015. Optimising comprehensibility in interlingual translation: The need for intralingual translation. In K. Maksymski, S. Gutermuth and S. Hansen-Schirra (eds.) Translation and comprehensibility. Berlin: Frank and Timme. pp. 163-194.

Oveisgharan, S., S. Shirani, A. Ghorbani, A. Soltanzade, A. Baghaei, S. Hosseini and N. Sarrafzadegan. 2006. Barthel Index in a middle-east country: Translation, validity and reliability. Cerebrovascular Diseases 22(5-6): 350-354. doi:10.1159/000094850

Saris, W.E. and I.N. Gallhofer. 2007. Design, evaluation, and analysis of questionnaires for survey research. Hoboken, NJ: Wiley.

Schriver, K. 1991. Plain language for expert or lay audiences: Designing text using protocol-aided revision. Technical Report No. 46. Pittsburgh: Centre for the Study of Writing, Berkeley, CA.

Schriver, K. and F. Gordon. 2010. Grounding plain language in research. Clarity 64: 33-39.

Stewart, J. 2010. Plain language: From 'movement' to 'profession'. Australian Journal of Communication 37(2): 51-72.

Susam-Sarajeva, S. 2009. The case study research method in translation studies. The Interpreter and Translator Trainer 3(1): 37-56. doi:10.1080/1750399X.2009.10798780

Tourangeau, R., L.J. Rips and K. Rasinski. 2000. The psychology of survey response. Cambridge: Cambridge University Press.

Van de Vijver, F.J.R. 2013. Contributions of internationalization to psychology: Toward a global and inclusive discipline. American Psychologist 68(8): 761-770. doi:10.1037/a0033762

Zethsen, K.K. 2009. Intralingual translation: An attempt at description. Meta: Translators' Journal 54(4): 795-812.

 

 

1 While intralingual translation takes place in the same language, interlingual translation is the transfer of meaning from one language to another, for example translation from English to isiXhosa.
2 A reader needs to draw a bridging inference to make sense of what is being asked if the question to be answered is preceded by an introductory sentence and if information from both sources needs to be connected to produce an answer.
3 Longer questions have been shown to make a question easier to answer if the additional length consists of redundant information, but not if new terms are introduced or if the question becomes syntactically complex due to the length.
4 Editorial note: Read more on readability formulas in the article by Jansen, Richards and Van Zyl in this volume.
5 Afrikaans and isiXhosa are two of the 11 official languages of South Africa.
6 An advance translation takes place before the ST is finalised to reveal problems in the ST: "Experience has shown that many translation problems linked with ST formulations only become apparent, even to experienced cross-cultural researchers, if a translation is attempted" (Harkness and Schoua-Glusberg 1998:105).
7 A translatability assessment is an evaluation of the extent to which a measurement instrument can be meaningfully translated into another language. Such an assessment could help to identify alternative formulations for translation purposes, help to modify original formulations to optimise subsequent translation efforts and help to detect irrelevant or inappropriate items early in the process (Conway et al. 2014).
