1. School of Humanities, Beijing University of Posts and Telecommunications, Beijing; 2. School of Foreign Studies, Chang’an University, Xi’an
Technical translation imposes stringent demands for terminological precision, logical coherence, and scientific fidelity, while simultaneously prioritizing target-text readability to facilitate knowledge dissemination. As a result, technical translators may rely heavily on instrumental competence, or search ability, to verify terminology and ensure the correctness of scientific facts. Generative Artificial Intelligence (GenAI) has emerged as a helpful tool for translators, and society at large is calling for improved AI literacy, a concept that must also be adapted to individual industries. Previous studies on AI literacy have proposed several frameworks (Chee et al., 2024; Ng et al., 2021; UNESCO, 2024) that, despite their differences, all focus on understanding the fundamental knowledge of AI, AI tool applications, and AI ethics. When AI literacy is applied to the translation industry, utilizing AI for domain-specific tasks and aligning AI literacy with Machine Translation (MT) literacy and data literacy is crucial (Krüger, 2024). However, little research has investigated student translators’ actual AI usage and its effect on the quality of technical translation. This study explores student translators’ AI and Neural Machine Translation (NMT) usage, and the impact of this tool usage on technical translation quality, through qualitative and quantitative methods. It aims to describe student translators’ AI/NMT usage patterns through qualitative analysis and to quantify the correlation between tool usage and translation quality, thereby providing insights for translation pedagogy in the age of AI.
AI literacy has been widely discussed in recent years, and almost all studies conceptualize it as a multidimensional competency. The proposed frameworks target different groups but converge on several elements. UNESCO (2024) proposed a general framework, meant to be adapted to specific uses, comprising a human-centered mindset, the ethics of AI, AI techniques and applications, and AI system design. Annapureddy et al. (2025) define 12 sequential competencies for generative AI literacy, ranging from basic AI knowledge through technical skills to contextual and legal understanding. Their model is intentionally progressive, starting with foundational AI concepts and tool use, then moving to content assessment, prompt engineering, and programming, and finally encompassing contextual, ethical, and legal competencies. Chee et al. (2024) synthesize 29 studies to propose eight overarching AI literacy competencies, comprising 18 sub-competencies that vary by learner group. These broad competencies encompass fundamental AI knowledge (concepts, data, and algorithms), computational and problem-solving skills, AI tool usage and device literacy, ethics and society, and career and application skills. The eight categories are applied differently for K–12, higher education, and workforce learners, reflecting a staged learning pathway.
Nearly all frameworks mention fundamental knowledge of AI, AI tool usage, and AI ethics, and some set higher requirements for creating AI systems. For specific target groups, the content of AI literacy can be tailored to their needs. Regarding translators’ AI literacy, Krüger’s (2024) framework is structured into five main dimensions tailored to the language industry: Technical Foundations, Domain-Specific Performance, Interaction, Implementation, and Ethical/Societal Aspects. His framework uniquely integrates machine translation literacy, data literacy, and AI literacy for the language industry. Its dimensions, such as technical foundations and domain performance, reflect industry-specific needs not found in general frameworks. It emphasizes understanding NMT architectures and avoiding machine circularities, in which machines create, translate, and evaluate texts without humans in the loop.
Several methods can be used to assess AI literacy. Quantitative methods include knowledge tests and surveys that measure confidence and perceived abilities; qualitative methods include project portfolios, interviews, and observations of AI interactions (Ng et al., 2021). This study uses a different quantitative measure of AI literacy: the frequency of AI usage. First, this choice follows from the purpose of the study, which is to examine translators’ actual use of AI and its impact on translation quality. Second, the measure aligns with the “use and apply” competency, as Ng et al. (2021) identify applying AI in real-world contexts as a core dimension of AI literacy; frequency of usage directly captures this behavioral competency, reflecting translators’ operational proficiency beyond theoretical knowledge. Third, the measurement is a domain-specific application of AI literacy, as encouraged by UNESCO (2024), Ng et al. (2021), and Krüger (2024). This study also looks into specific application and problem-solving skills, examining how translators use AI tools to solve specific scientific translation problems.
AI literacy can be incorporated into the instrumental competence of the PACTE group’s translation competence model. The impact of instrumental competence on translation quality has been widely studied and shown to be positive (Mohammed & Al-Sowaidi, 2023). From the translator’s perspective, GenAI can be seen as an upgraded instrument for performing the translation task. We therefore hypothesize that higher AI literacy improves translation quality.
Scientific and technical translation faces core linguistic and terminological challenges. Studies consistently highlight the difficulty of finding accurate equivalents, especially across disparate linguistic systems such as English and Arabic (Al-Quran, 2011), Farsi (Talebinejad et al., 2012), and Greek (Christidou, 2018). Pedagogically, Sharkas (2013) found that trainee translators with a limited scientific background produced significantly more accurate translations after receiving targeted subject knowledge, underscoring the value of domain familiarity. Petts et al. (2024) argued for more convergence between Technical and Professional Communication and Translation Studies. Among the five themes they developed, they highlight that technical translation must account for cultural and rhetorical differences to ensure clarity and usability across locales. This need for specialized knowledge extends to the effective use of emerging technologies. Zhang et al. (2025) found that while students recognize that GenAI benefits efficiency and quality, they also report challenges directly tied to domain and instrumental competence, including judging output adequacy, crafting effective prompts, and overcoming technical limitations of the tools.
Despite the evident reliance of technical translators on external resources and strategic competencies, little research has explicitly investigated how instrumental competence (e.g., advanced information searching, resource evaluation) directly influences technical translation quality. This gap persists even though translators clearly need these skills to navigate complex terminology (as shown by Talebinejad et al. and Al-Quran), to comprehend source material sufficiently to make explicitation choices appropriate to the “textual degree of technicality” (Krüger, 2016), and to effectively utilize GenAI tools (Zhang et al., 2025).
The present study aims to answer the following questions: (1) How does AI literacy influence the quality of technical translations? (2) What are participants’ tool usage patterns when translating a conceptual technical passage (CTP) and an operational technical passage (OTP)? (3) How do student translators perceive the helpfulness of AI in technical translation?
A group of student translators was invited to translate two original technical passages from English into Chinese. They could use any translation tool accessible through a browser and an internet connection. The experiment included two conditions: one CTP and one OTP. Multiple dimensions of quantitative data, including AI usage frequency, NMT usage frequency, and product scores, were collected to answer research questions 1 and 2. Inputlog and screen recording were used to record the translation process. Qualitative data from think-aloud protocols and interviews were collected to answer research question 3.
For research question 1, we hypothesized that the higher the AI literacy, the higher the translation quality for technical texts, as LLMs are perceived to outperform NMT in translation quality and can also provide domain-knowledge support and enhance writing. Participants with higher AI literacy might know better how to use multiple tools to improve their translation quality. For research question 2, we hypothesized that students would rely more on tools for the operational passage than for the conceptual passage, since machine translation tools are generally believed to be better at translating less abstract, more operational texts (Salimi, 2014). Moreover, our selected operational technical passage has a higher density of specialized terms, which increases the difficulty of acquiring the necessary domain knowledge; the task can seem daunting at first sight, and a machine-translated first version might reduce translators’ pressure. For research question 3, we hypothesized that student translators perceive AI as a helpful tool in translating technical texts, as it can provide background information.
Table 1 Hypotheses

| RQ | Number | Description |
| --- | --- | --- |
| RQ 1 | Hypothesis 1 | Higher AI literacy correlates with higher translation quality of technical texts. |
| RQ 2 | Hypothesis 2 | Operational technical texts lead to greater reliance on AI tools than conceptual technical texts. |
| RQ 3 | Hypothesis 3 | Participants with higher AI literacy perceive AI tools as more effective in improving translation quality. |
Twenty-nine participants (mean age = 25.5; 25 female, 4 male) were recruited from translation master’s programs at universities in Beijing and Xi’an. All participants provided signed informed consent and received reimbursement for their time and effort. They were in their first or second year of study, and according to the questionnaire, 37.9% reported having undergone targeted training in technical translation. Their native language (L1) was Chinese, and their foreign language (L2) was English. The programs from which participants were recruited are highly competitive, with admission contingent on rigorous testing of English proficiency, translation skills, Chinese knowledge, and ad hoc knowledge. This selective admission process ensured a cohort with homogeneous baseline competencies in translation and language skills, thereby controlling for these variables in the study design.
Translation products were rated by two qualified raters following the Multidimensional Quality Metrics (MQM) framework (Lommel et al., 2014). Given the nature of our material, we selected a subset of error types: terminology, accuracy (mistranslation, addition, omission), linguistic conventions (punctuation, unintelligible), and style (register, inconsistent style). The raters were trained in the use of the MQM scorecard, the error types, and the penalty levels. After a pilot rating of three products, the raters and the research team discussed the results and reached an agreement.
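To make the scoring procedure concrete, the sketch below illustrates MQM-style penalty arithmetic in Python. The severity weights (minor = 1, major = 5, critical = 10) and the per-word normalization are common MQM defaults rather than the exact parameters of our scorecard, so the figures are illustrative only.

```python
# Illustrative MQM-style scoring; severity weights and normalization
# are assumed defaults, not necessarily our scorecard's parameters.
SEVERITY_PENALTY = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """errors: list of (error_type, severity) pairs annotated by a rater."""
    penalty = sum(SEVERITY_PENALTY[severity] for _, severity in errors)
    # Normalize the penalty total per 100 words and floor the score at 0.
    return max(0.0, 100 - penalty * 100 / word_count)

# Two minor terminology errors and one major mistranslation in a
# 200-word passage: 100 - (1 + 1 + 5) * 100 / 200 = 96.5
print(mqm_score([("terminology", "minor"), ("terminology", "minor"),
                 ("mistranslation", "major")], 200))
```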
Two passages from Thinking in Java (Eckel, 2006) were selected for the experiment. The first passage, titled “The Progress of Abstraction” (202 words), discusses the philosophical underpinnings of programming design, while the second, “Order of Constructor Calls” (197 words), focuses on practical implementation rules. To assess textual complexity, four linguistic metrics were computed using Python’s NLTK package. Table 2 shows the comparison of textual complexity between the two passages.
Table 2 Textual Complexity of the Two Passages
| Metric | Passage 1 | Passage 2 |
| --- | --- | --- |
| Type-token ratio (TTR) | 0.46 | 0.51 |
| Lexical density | 0.53 | 0.56 |
| Main clause length (words) | 12.1 | 12.1 |
| Mean sentence length (words) | 30.13 | 22.6 |
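Three of the four metrics in Table 2 can be approximated with NLTK along the following lines; main clause length requires a syntactic parser and is omitted here. Tokenization and tag-set choices are assumptions of this sketch, so exact values may differ slightly from those reported.

```python
import nltk
from nltk import pos_tag, sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def complexity_metrics(text):
    sentences = sent_tokenize(text)
    tokens = [t for t in word_tokenize(text) if t.isalpha()]
    # Content words (nouns, verbs, adjectives, adverbs) for lexical density
    content = [w for w, tag in pos_tag(tokens)
               if tag[:2] in ("NN", "VB", "JJ", "RB")]
    return {
        "type_token_ratio": len({t.lower() for t in tokens}) / len(tokens),
        "lexical_density": len(content) / len(tokens),
        "mean_sentence_length": len(tokens) / len(sentences),
    }
```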
Stylistically, Passage 1 employs an explanatory and persuasive tone, discussing high-level design concepts with relatively few technical terms. In contrast, Passage 2 adopts an instructive and descriptive style, featuring more domain-specific terminology to detail concrete programming rules. Based on these differences, Passage 1 was categorized as the Conceptual Technical Passage (CTP) and Passage 2 as the Operational Technical Passage (OTP).
The experiment was conducted in translation labs at universities in Beijing and Xi’an, China. Participants first completed a demographic survey, followed by a 3-minute training session on the think-aloud protocol. The entire session lasted between 45 and 80 minutes. During the task, participants translated the source texts directly in a Microsoft Word document. Inputlog software (Leijten & Van Waes, 2013) recorded their keystrokes and mouse activity, and the entire process was also captured by the screen recording software EVCapture. Participants had unrestricted internet access and were explicitly permitted to use any digital tools, including generative AI (e.g., Ernie Bot), machine translation engines (e.g., DeepL), or other web-based resources. No time constraints were imposed; participants could submit their translations once they deemed the quality satisfactory.
The quantitative data comprised three key metrics: AI literacy indicators, machine translation usage frequency, and the scores of the two translated passages. AI literacy was operationalized as the frequency of AI tool usage (Albir et al., 2020). Each instance of AI tool engagement was recorded as follows: if a participant accessed an AI tool and pasted source text into the query textbox, this was counted as one usage event, and each further query was logged as an additional usage event. Participants who did not use any AI tools were assigned a value of zero for both metrics. Tool usage frequency can reflect the variety of searches, which serves as an indicator of better results (Kuznik & Olalla-Soler, 2018). In the retrospective interview, we asked about participants’ daily use of AI in translation, but we did not use their answers as a measure of AI literacy, as we hold that actual usage behaviour is more convincing than self-reports (Carrell & Willmington, 1996; Mathieson et al., 2009). All statistical analyses were conducted using SPSS 27 (IBM Corp., 2020). The qualitative data consisted of transcribed interview responses, which were subjected to thematic analysis (Braun & Clarke, 2012), with codes derived inductively from the participants’ interviews.
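The counting rule can be expressed as a simple tally over annotations made while reviewing the recordings. The event-log format below is hypothetical (our coding was done manually from the Inputlog data and screen recordings); it only illustrates that pasting source text and each subsequent query count as one event apiece.

```python
from collections import Counter

# Hypothetical annotations: (participant, tool_category, event), where
# "paste_source" and "query" each count as one usage event.
log = [
    ("A13", "AI", "paste_source"),
    ("A13", "AI", "query"),
    ("A13", "NMT", "paste_source"),
]

usage = Counter((pid, tool) for pid, tool, event in log
                if event in ("paste_source", "query"))
print(usage[("A13", "AI")])   # -> 2 AI usage events for A13
print(usage[("A13", "NMT")])  # -> 1 NMT usage event for A13
```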
Tool usage behavior was largely consistent between CTP and OTP: the change in textual style did not lead participants to change their tool choices. As shown in Table 3, when translating CTP, 8 participants used AI (the AI group), while the other 21 did not (the non-AI group), an AI uptake of 27.6%. When translating OTP, 9 participants used AI and 20 did not, an AI uptake of 31%. Slightly more participants turned to AI for help when translating OTP. Table 4 presents the distribution of participants’ usage patterns across different tools.
Table 3 AI and NMT Frequency
| Participant | CTP AI frequency | CTP NMT frequency | OTP AI frequency | OTP NMT frequency |
| --- | --- | --- | --- | --- |
| A01 | 0 | 11 | 0 | 8 |
| A02 | 0 | 12 | 2 | 14 |
| A03 | 7 | 1 | 2 | 2 |
| A04 | 0 | 7 | 0 | 10 |
| A05 | 0 | 3 | 0 | 4 |
| A06 | 0 | 13 | 0 | 20 |
| A07 | 0 | 8 | 0 | 13 |
| A08 | 0 | 0 | 0 | 7 |
| A09 | 0 | 2 | 0 | 1 |
| A10 | 0 | 9 | 0 | 11 |
| A11 | 14 | 0 | 8 | 0 |
| A12 | 0 | 0 | 1 | 0 |
| A13 | 2 | 4 | 10 | 8 |
| A14 | 0 | 3 | 0 | 2 |
| A15 | 0 | 1 | 0 | 1 |
| A16 | 0 | 0 | 0 | 0 |
| A17 | 0 | 5 | 0 | 2 |
| A18 | 0 | 0 | 0 | 0 |
| A19 | 1 | 1 | 1 | 0 |
| A21 | 4 | 12 | 6 | 2 |
| A22 | 6 | 5 | 1 | 3 |
| A23 | 0 | 3 | 0 | 5 |
| A24 | 0 | 7 | 0 | 12 |
| A25 | 0 | 6 | 0 | 5 |
| A26 | 0 | 12 | 0 | 7 |
| A27 | 0 | 11 | 0 | 12 |
| A28 | 0 | 7 | 0 | 16 |
| A29 | 0 | 28 | 0 | 22 |
| A30 | 5 | 1 | 11 | 1 |
Table 4 Participants’ Tool Usage Pattern
| Pattern | CTP | OTP |
| --- | --- | --- |
| AI only | 1 | 3 |
| AI+NMT | 7 | 6 |
| NMT only | 18 | 18 |
| Zero tool | 3 | 2 |
The specific tools used by participants were categorized into four groups: AI models, NMT tools, webpages, and dictionaries. Table 5 presents the specific tools used and their frequencies. The most frequently used AI tool was Ernie Bot, followed by Kimi. NMT usage showed greater variety: eight engines were used in total, with DeepL the most popular. Participants also frequently consulted webpages, with CSDN blogs the most accessed, followed by Zhihu and Baidu Encyclopedia. Comparing the categories, AI models clearly show the least variety; participants resorted more to NMT engines and webpages to solve their translation problems.
Table 5 Specific Tool Usage Category
| Category | Tools |
| --- | --- |
| AI models | Ernie Bot (8), Kimi (2) |
| NMT | DeepL (11), Baidu Translate (6), Youdao Fanyi (6), Microsoft Translator (2), Bing Translate (1), 360 Translator (1), Google Translate (1), Sogou Translator (1) |
| Webpage | CSDN blogs (10), Zhihu (8), Baidu Encyclopedia (6), MDN (1), W3school (1), GitHub (1), Tencent Cloud (1), GeeksForGeeks (1), Britannica Encyclopedia (1) |
| Dictionary | Youdao (5), Cambridge (4), Longman (2), Bing (1), iciba (1), Linguee (1), Collins (1) |
As our experiment was conducted between April and June 2024, before Youdao Fanyi incorporated AI tools into its translator webpage, we counted participants’ use of Youdao Fanyi as NMT usage rather than AI usage.
According to Table 6, the minimum CTP score is 62.54 and the maximum is 93.06, with a much smaller standard deviation than the OTP scores. This is because three participants achieved abnormally low scores on OTP. We used the interquartile range (IQR), which measures the spread of the middle 50% of the data, to detect outliers: values falling well outside this range are flagged as potential outliers. On this basis, the OTP scores of 20, 24, and 44 were detected as outliers, which explains the large standard deviation of the OTP scores. Detailed descriptive statistics can be found in Table 6.
Table 6 Participants’ Scores of CTP and OTP
| | N | Minimum | Maximum | Mean | Std. Deviation |
| --- | --- | --- | --- | --- | --- |
| CTP | 29 | 62.54 | 93.06 | 79.34 | 9.31 |
| OTP | 29 | 20.10 | 94.93 | 78.43 | 19.89 |
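The outlier rule described above can be sketched as follows; Tukey’s conventional multiplier k = 1.5 is assumed, as the text of the rule does not fix it.

```python
import numpy as np

def iqr_outliers(scores, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    return [s for s in scores if s < q1 - k * iqr or s > q3 + k * iqr]

# e.g. iqr_outliers(otp_scores) would flag the three low OTP scores
```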
A paired-samples t-test (mean difference = −3.49, SD = 7.25, p = .02) showed a significant difference between the mean scores of CTP and OTP after the outliers were removed: OTP scores were significantly higher than CTP scores.
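The same comparison can be reproduced outside SPSS; a minimal SciPy sketch, with illustrative paired data rather than the study’s actual scores:

```python
from scipy import stats

# Illustrative paired scores (not the study's data): one CTP and one
# OTP score per participant, after outlier removal.
ctp_scores = [79.3, 82.1, 75.6, 88.0, 81.4, 77.2]
otp_scores = [83.0, 85.9, 78.2, 94.9, 84.1, 79.5]

t_stat, p_value = stats.ttest_rel(ctp_scores, otp_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```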
For CTP, the AI group (M = 87.54) scored significantly higher (p = .03) than the non-AI group (M = 77.63), indicating that AI usage can improve CTP quality; for OTP, no significant difference (p = .30) was found between the two groups. Hypothesis 1 is therefore only partially supported: it holds for CTP but not for OTP. Hypothesis 2 likewise receives little support: AI uptake was only slightly higher for OTP (31% vs. 27.6%), and overall tool usage patterns were consistent across the two passages. Among the nine AI users in OTP, five used AI no more than twice, and only as a machine translation engine: once their AI tool provided a draft translation, they moved on to post-editing without using AI for reference or for improving language. This suggests that their AI literacy was still limited.
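The group comparison reads as an independent-samples test, although the text does not name the procedure; the sketch below therefore assumes a two-sample t-test, with Welch’s correction as a common default for unequal group sizes, and uses illustrative scores.

```python
from scipy import stats

# Illustrative scores only; in the study the groups were 8 vs. 21.
ai_group = [87.5, 90.2, 85.1, 88.9, 86.0]
non_ai_group = [77.6, 80.1, 74.2, 79.0, 76.5, 81.3, 73.8]

# Welch's t-test (unequal variances assumed)
t_stat, p_value = stats.ttest_ind(ai_group, non_ai_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```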
Qualitative data showed that participants who used AI in the experiment reported higher trust in AI models as tools for assisting translation. They reported that AI tools can produce more accurate translations (A21) and more accurate terminology (A17), offer user-friendly interaction (A25, A26, A29), excel at translation tasks the model is good at (A28), and compensate for a lack of domain knowledge (A13, A16). However, the reported limitations include inconsistent summarization capabilities, occasional illogical outputs (A24), and a reliance on detailed user instructions to generate contextually appropriate translations. Additionally, LLMs often struggle with colloquial expressions and may generate unverified or nonsensical content without clear guidance (A18). LLMs’ effectiveness depends on the user’s ability to craft precise prompts (A17) and to cross-validate results across multiple models (A16). Based on the overall qualitative results, Hypothesis 3 can be provisionally accepted.
Thematic analysis of the interview data generated three themes. Table 7 shows the themes and representative quotes.
Table 7 Themes and Quotes from Qualitative Data
Theme 1: AI is seen as convenient for translation and information retrieval, but its inconsistent output requires careful human editing.
- “One advantage of LLMs is convenience and speed. As long as you give it a command, it will parse your question... However, if you call a memory base or an electronic dictionary, it is relatively rigid. You ask something, and you get something. But sometimes, there might be some explanations, some elaborations that help you understand better.” (A2)

Theme 2: The effectiveness and quality of AI translation depend on the model, subject matter, and user strategies.
- “I have used Ernie Bot and ChatGLM. I initially preferred using Ernie Bot, but later it started charging, so I stopped. And for ChatGLM, I feel it sometimes fabricates things more. The current 4.0 version of Ernie Bot is slightly better than the 3.5 version... But for this kind of text, it feels okay, because for this kind of programming text, on the one hand, those large language models indeed perform quite well in this area...” (A24)

Theme 3: AI aids domain knowledge gaps with research and terminology, but cannot replace deep expertise, especially in advanced fields.
- “Since it’s better than Baidu and can provide more information, I no longer use traditional search methods after using Large Language Models.” (A11)
- “I think computers do understand some information. This is a special professional term. When it extends to other meanings, it may not necessarily understand. ... It only knows some words, but doesn’t know how to check reality.” (A19)
We specifically instructed students to use any tools they found useful to complete the translation task, with the goal of producing satisfactory target texts. Despite this flexibility, the uptake of AI tools for the CTP task was only 27.6%, indicating that MTI students generally do not rely heavily on AI tools for translation or information retrieval. This suggests a low level of integration of AI into their translation practices.
Among the students who did use AI tools, the range of tools was quite limited: only Ernie Bot and Kimi were used, reflecting a lack of diversity in AI tool selection. Moreover, most students used AI tools primarily for direct translation rather than for broader language support. For instance, they rarely employed AI to explain background knowledge, suggest alternative expressions, or improve the overall quality of the text. The typical workflow involved obtaining a raw target text from AI tools and then post-editing it. When faced with gaps or uncertainties, students would often consult external sources, such as webpages, to validate information or improve accuracy. This pattern suggests that students perceive AI tools mainly as enhanced machine translation engines rather than as comprehensive language assistance platforms, a perception further supported by A21, who described AI as essentially a more powerful version of traditional machine translation.
In the qualitative data, we observed a disparity between participants’ actual AI use and their self-reported daily AI usage: several participants did not use any AI tool in the experiment but reported that they often use ChatGPT in their daily translation work.
Participants with high AI literacy used AI tools and NMT at the same time. They actively compared the outputs of AI tools and NMT and selected the best version of each sentence. Two examples from A22 and A25 illustrate how they used AI tools in the experiment.
“For this time, I used Ernie Bot and DeepL. Compared to DeepL’s one-time solution, AI tools have the advantage of fine-tuning translation, so that it can give you better versions according to your prompts.” (A22)
“Sometimes the versions provided by AI have not actually improved. They claimed that the new versions were better. But if you evaluate them closely, you would find out that it is not the case.” (A25)
It can be observed that AI literacy enables more effective tool usage. Users who understand AI capabilities and limitations can strategically combine tools for better outcomes. Also, human oversight remains crucial in translation.
For CTP, AI literacy correlated with translation quality, but for OTP it did not. A case study of A3, a participant considered high in AI literacy, may reveal the reason for this difference. Large differences were found in A3’s behavior when translating the two passages. He rarely used AI to perform the machine translation task itself; rather, he used Ernie Bot to assist his comprehension of the source text and, occasionally, to improve his own target text and translate terms. For example, he asked Ernie Bot, “编程语言的抽象是什么 (What does abstraction mean in programming languages?)”, trying to confirm whether “abstraction” could be directly translated as “抽象”. For the same purpose, he asked Ernie Bot what FORTRAN, BASIC, and C are in the context of programming languages. Figure 1 is a screenshot of his interaction with Ernie Bot: he set a scenario for the AI to explain the terms. In the retrospective interview, he mentioned the importance of setting scenarios when using AI and said he trusted the AI’s answers delivered under the set scenario.
In the sentence “Many so-called ‘imperative’ languages that followed...”, he first generated the translation himself. He struggled with the meaning of “that followed”; his first version was “许多在编程语言基础上的所谓的‘指令性’语言都是对编程语言的抽象形式 (Many so-called ‘imperative’ languages based on programming languages are abstract forms of programming languages)”. When he came back to revise this sentence after translating both paragraphs, he was unsure about rendering “imperative” as “指令性”, so he asked Ernie Bot for suggestions. Ernie Bot returned that it should be “命令式”, and he happily accepted this solution and changed his translation.
Figure 1 Screenshot of A3’s AI interaction
However, when he first looked at the OTP, he said directly in his think-aloud protocol that he could not understand the passage and would let AI translate it as a whole. The screen recording shows that he did not change much of the AI-generated translation; instead, he spent considerable time reading and comprehending each term and sentence of the OTP to assess the accuracy of the AI output. These behavioral differences show that A3 did not trust AI to translate the CTP: he probably believed that, as a human translator, he could perform best on the more conceptual, professional text, or at least the CTP seemed more comprehensible at first glance and did not induce him to translate it directly with AI. The human factor is thus larger in CTP than in OTP. Compared with the group who used NMT only, AI literacy helped improve understanding, term accuracy, and style, which brought significant quality improvements.
The finding that AI literacy is not correlated with the translation quality of OTP can also be explained from the perspective of whether general-purpose LLMs have actually improved on non-specialized NMT. Previous literature showed mixed results: LLMs demonstrated strength in source text comprehension and target language style (Mohsen, 2024), while Google Translate performed better than LLMs when translating English scientific texts into Arabic (Alzain et al., 2024). It therefore remains unclear whether general LLMs translate technological texts better than NMT. The behavior pattern of translators, however, was nearly unanimous when translating OTP: they all relied heavily on the machine, whether LLMs or NMT. Machines served as an equalizer of translation quality in this highly technical text, which may explain why AI literacy did not correlate with the translation quality of OTP.
A comparison of the interview data of participants with high and low AI literacy shows differences in their perceptions. First, the high-literacy group acknowledged the multiple functions of AI in assisting their translation, while the low-literacy group failed to see these. AI was used to look up technical terms and concepts and obtain explanations: A30 noted that “Baidu Ernie Bot... I have many terms that I don’t know the meaning. So I directly ask the bot. It can help me quickly retrieve and answer my questions”, and A12 found that “AI’s translation of terminology is very accurate. Therefore, I don’t need to look up these terms one by one...”. AI was also used for checking, polishing, or revising translations: A19 stated that they “revised the translation based on AI’s work and edited against the original text”, and A21 used ChatGPT “for polishing” or “to help me revise this sentence to be more fluent”. Providing context to the AI was seen as important for getting better results; A11 mentioned “put the context in for translation, so that the translation can form better coherence. If there is only one word or a single sentence, there is no context”.
AI is used to acquire background knowledge about the technical domain. A participant says, “If there are concepts I don’t understand, I will directly ask AI” (A11). Another explains, “I directly search for the meaning of the terms, and put the English in. It will give an English explanation. Then I say tell me in Chinese again” (A11).
Some participants use multiple LLMs and compare their outputs for verification. One states, “I have several language models on my phone... I will produce results from 2 to 3 different language models for the same content, and then compare if there are some subtle differences or some commonalities to confirm your source and the accuracy of this information from multiple perspectives”.
Second, the low-literacy group showed an evident distrust of LLMs. In their perception, a significant limitation of LLMs is accuracy and reliability, including “hallucinations” (A24, A18). Specifically, A24 said, “I think ChatGLM also fabricates things quite a lot sometimes”, and A18 noted that it “will talk nonsense”. They also reported that AI-generated translations often suffer from stiffness and unnatural language, requiring significant human editing: A19 reported that “AI’s initial translation was not ideal” and that the target text can be “very stiff”. This phenomenon can be explained by the Technology Acceptance Model (TAM) (Davis, 1993): their perceived usefulness of AI is low, and their low trust in it might hinder their usage.
The qualitative data thus provided strong evidence that participants with higher AI literacy perceive AI tools as more effective in improving translation quality. As a result, Hypothesis 3 is accepted.
This study provides evidence that AI literacy influences the quality of English-Chinese technical translations. In answer to the first research question, AI literacy significantly enhanced translation quality for CTP but did not improve quality for OTP. In the conceptual translation task, participants with higher AI literacy produced more accurate and coherent translations, likely because they could effectively use AI to compensate for gaps in domain knowledge. By contrast, for the operational text, no overall quality gain from AI literacy was observed.
In response to the third research question, student translators’ perceptions of AI assistance differed markedly by their AI literacy level. Those with high AI literacy described AI tools as a helpful, even “indispensable” part of their workflow. These students reported using multiple AI systems, strategically combining and comparing outputs to fine-tune their translations. For example, highly literate participants used AI to look up and verify technical terminology, with one noting that “AI’s translation of terminology is very accurate”. They also felt that AI support reduced their anxiety and assisted them in making stylistic adjustments, trusting the technology to provide “trustworthy terminology solutions”. By contrast, students with lower AI literacy expressed skepticism toward AI-generated translations. The low-literacy group frequently pointed to issues of accuracy and reliability, for instance mentioning that large language models sometimes “fabricate things” or produce “very stiff” language. They relied more heavily on their own knowledge and revision effort, believing that human judgment was needed to correct AI errors. Thus, higher AI literacy was associated not only with greater use of AI tools but also with a more positive, confident attitude toward AI assistance, whereas lower literacy fostered wariness about AI outputs.
The key findings can be summarized as follows: (1) AI literacy benefits conceptual scientific translation quality, likely by helping translators access and verify specialized knowledge; (2) no significant quality improvement from AI literacy was found for operational texts, where precise procedural language may be better handled through careful human editing; and (3) translator attitudes toward AI are divided by literacy level: high-literacy students use AI tools more flexibly and view them as effective aids, whereas low-literacy students remain cautious about AI reliability.
These findings have implications for translation education. In particular, they suggest that AI literacy should be incorporated into translator training curricula as a core sub-competence, and that more structured and comprehensive courses are needed to teach technology in translation (Hazaea & Qassem, 2024). Teachers may show students how to use AI tools strategically and critically; training might include exercises in comparing outputs from different AI systems, prompting techniques for obtaining accurate terminology, and methods for validating or post-editing AI-generated drafts. Emphasizing AI literacy will help future translators leverage technology effectively, reducing anxiety about unfamiliar terminology and focusing their effort on high-level editing, rather than discouraging the use of such tools. At the same time, curricula should continue to reinforce traditional bilingual competence, ensuring that translators can produce accurate, fluent, and coherent translations even when AI tools are unavailable. In line with our results and existing competence models, we recommend that AI literacy be explicitly integrated into technical translation courses as a complement to domain and language training.
Several limitations of the present study should be noted. First, the sample was relatively small and homogeneous, which limits the generalizability of the findings. The texts used were both from the computer-science domain; different results might emerge with other subject areas or languages. Second, the two passages (conceptual vs. operational) represent only a subset of possible technical text types, so the findings may not extend to all forms of technical writing. Third, because participants chose whether and how to use AI tools, our design reflects naturalistic usage but cannot establish causality. Finally, the use of think-aloud protocols and post-experiment interviews provided rich insight into perceptions but might have influenced participants’ behavior. Future research should address these limitations by using larger and more diverse translator populations, including professional translators and varied language pairs, as well as a wider range of text genres. Future research might also explore the effects of formal AI literacy training. For example, intervention studies could train one group of students in AI tool use and compare their translation performance to a control group. Longitudinal studies would help determine how AI literacy and attitudes evolve over time and with experience.
[1] Albir, P. G. A. H., Galán-Mañas, A., Kuznik, A., Olalla-Soler, C., Rodríguez-Inés, P., & Romero, L. (2020). Translation competence acquisition. Design and results of the PACTE group’s experimental research. The Interpreter and Translator Trainer, 14(2), 95–233.
[2] Al-Quran, M. A. (2011). Constraints on Arabic translations of English technical terms. Babel-Revue Internationale de la Traduction-International Journal of Translation, 57(4), 443–451.
[3] Alzain, E., Nagi, K. A., & AlGobaei, F. (2024). The quality of Google Translate and ChatGPT English to Arabic translation: The case of scientific text translation. Forum for Linguistic Studies, 6(3), 837–849.
[4] Annapureddy, R., Fornaroli, A., & Gatica-Perez, D. (2025). Generative AI Literacy: Twelve Defining Competencies. Digital Government: Research and Practice, 6(1), 1–21.
[5] Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol. 2: Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 57–71). American Psychological Association.
[6] Carrell, L. J., & Willmington, S. C. (1996). A comparison of self‐report and performance data in assessing speaking and listening competence. Communication Reports, 9(2), 185–191.
[7] Chee, H., Ahn, S., & Lee, J. (2024). A Competency Framework for AI Literacy: Variations by Different Learner Groups and an Implied Learning Pathway. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13556
[8] Christidou, S. (2018). Many roads lead to Rome, and we have found seven: A control mechanism of bilingual scientific texts translations. Babel-Revue Internationale de la Traduction-International Journal of Translation, 64(2), 250–268.
[9] Davis, F. D. (1993). User acceptance of information technology: System characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38(3), 475–487.
[10] Eckel, B. (2006). Thinking in Java (4th ed.). Pearson.
[11] Hazaea, A. N., & Qassem, M. (2024). Translation competence in translator training programs at Saudi universities: Empirical study. Open Education Studies, 6(1), 111–134.
[12] IBM Corp. (2020). IBM SPSS Statistics for Windows (Version 27.0) [Computer software]. IBM Corp.
[13] Jääskeläinen, R. (2000). Focus on Methodology in Think-aloud Studies on Translating. In S. Tirkkonen-Condit & R. Jääskeläinen (Eds.), Benjamins Translation Library (Vol. 37, p. 71). John Benjamins Publishing Company.
[14] Krüger, R. (2016). Textual degree of technicality as a potential factor influencing the occurrence of explicitation in scientific and technical translation. The Journal of Specialised Translation, 26, 96–115.
[15] Krüger, R. (2024). Outline of an Artificial Intelligence Literacy Framework for Translation, Interpreting and Specialised Communication. Lublin Studies in Modern Languages and Literature, 48(3), 11–23.
[16] Kuznik, A., & Olalla-Soler, C. (2018). Results of PACTE group’s experimental research on Translation Competence Acquisition. The acquisition of the instrumental sub-competence. Across Languages and Cultures, 19(1), 19–51.
[17] Leijten, M., & Van Waes, L. (2013). Keystroke Logging in Writing Research: Using Inputlog to Analyze and Visualize Writing Processes. Written Communication, 30(3), 358–392.
[18] Lommel, A., Uszkoreit, H., & Burchardt, A. (2014). Multidimensional Quality Metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica: Tecnologies de la Traducció, 12, 455–463.
[19] Mathieson, F. M., Barnfield, T., & Beaumont, G. (2009). Are we as good as we think we are? Self-assessment versus other forms of assessment of competence in psychotherapy. The Cognitive Behaviour Therapist, 2(1), 43–50.
[20] Mohammed, T. A. S., & Al-Sowaidi, B. (2023). Enhancing Instrumental Competence in Translator Training in a Higher Education Context: A Task-Based Approach. Theory and Practice in Language Studies, 13(3), 555–566.
[21] Mohsen, M. (2024). Artificial Intelligence in Academic Translation: A Comparative Study of Large Language Models and Google Translate. PSYCHOLINGUISTICS, 35(2), 134–156.
[22] Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.
[23] Petts, A., Veeramoothoo, S. (Chakrika), & Verzella, M. (2024). Setting Foundations: An Integrative Literature Review at the Intersections of Technical and Professional Communication and Translation Studies. IEEE Transactions on Professional Communication, 67(3), 285–300.
[24] Salimi, J. (2014). Machine translation of fictional and non-fictional texts: An examination of Google Translate’s accuracy on translation of fictional versus non-fictional texts. https://api.semanticscholar.org/CorpusID:60228133
[25] Sharkas, H. (2013). The effectiveness of targeted subject knowledge in the teaching of scientific translation. The Interpreter and Translator Trainer, 7(1), 51–70.
[26] Talebinejad, M. R., Dastjerdi, H. V., & Mahmoodi, R. (2012). Barriers to technical terms in translation: Borrowings or neologisms. Terminology, 18(2), 167–187.
[27] UNESCO. (2024). AI competency framework for students. UNESCO.
[28] Zhang, W., Li, A. W., & Wu, C. (2025). University students’ perceptions of using generative AI in translation practices. Instructional Science. https://doi.org/10.1007/s11251-025-09705-y
All programming languages provide abstractions. It can be argued that the complexity of the problems you’re able to solve is directly related to the kind and quality of abstraction. By “kind” I mean, “What is it that you are abstracting?” Assembly language is a small abstraction of the underlying machine. Many so-called “imperative” languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve. The programmer must establish the association between the machine model (in the “solution space,” which is the place where you’re implementing that solution, such as a computer) and the model of the problem that is actually being solved (in the “problem space,” which is the place where the problem exists, such as a business). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire “programming methods” industry.
The order of constructor calls was briefly discussed in the Initialization & Cleanup chapter and again in the Reusing Classes chapter, but that was before polymorphism was introduced.
A constructor for the base class is always called during the construction process for a derived class, chaining up the inheritance hierarchy so that a constructor for every base class is called. This makes sense because the constructor has a special job: to see that the object is built properly. A derived class has access to its own members only, and not to those of the base class (whose members are typically private). Only the base-class constructor has the proper knowledge and access to initialize its own elements. Therefore, it’s essential that all constructors get called, otherwise the entire object wouldn’t be constructed. That’s why the compiler enforces a constructor call for every portion of a derived class. It will silently call the default constructor if you don’t explicitly call a base-class constructor in the derived-class constructor body. If there is no default constructor, the compiler will complain. (In the case where a class has no constructors, the compiler will automatically synthesize a default constructor.)