Hassan Soleimani; Maryam Mahdavipour
Abstract
In recent years, a number of large-scale writing assessments (e.g., TOEFL iBT) have employed integrated writing tests to measure test takers' academic writing ability. Using a quantitative method, the current study examined how written textual features and the use of source material(s) varied across two types of text-based integrated writing tasks (i.e., listening-to-write vs. reading-to-write) and two levels of language proficiency (i.e., high vs. low). Sixty Iranian English-major students were selected through purposive sampling and divided into low- and high-proficiency groups based on an IELTS practice test. They were then required to write in response to a listening-to-write task and a reading-to-write task. Results of two-way and one-way ANOVAs revealed that, first, variations in integrated writing task type together with proficiency level had a significant effect on all the generated discourse features; second, the two types of integrated tasks produced features that overlapped to a large extent; and third, some features distinguished a particular proficiency level. In addition, the results indicated that plagiarism was higher in response to the reading-to-write task than the listening-to-write task, especially among the low-proficiency writers. Implications of the study are presented.