Turnitin, a smart tool for plagiarism detection?

Plagiarism is a serious issue in academia. In this blog post, I will talk about Turnitin, a service used by some journals and conferences to check papers for plagiarism.

I already wrote a blog post about this, which you can read here:

How journal paper similarity checking works? (CrossCheck) | The Data Mining Blog (philippe-fournier-viger.com)

Today, I just want to show you that this service is, in my opinion, not very “smart”. Although this service is useful for detecting plagiarism, I have noticed that it sometimes also flags very generic text that, in my opinion, should not be considered when evaluating plagiarism. As an example, I will show you seven excerpts from a Turnitin report for a conference paper that I submitted:

(1) In a sentence of 23 words, Turnitin flagged a similarity with another source because I used six words in the same order as that source, even though no more than three of these words appear consecutively.

(2) At another place in the paper, Turnitin flags a similarity because I used the same keyword as another paper, while all the other keywords are different.

(3) At another place, the submitted paper is considered similar to another paper because I say that, in this paper, I propose something novel (!):

(4) Here is another example that shows how this tool can sometimes be not very “smart” (in my opinion): having a section called “Experimental evaluation” and saying that we will assess the performance of something is considered similar (!).

(5) Another example of very generic sequences of words that are deemed similar:

(6) And another example of a paragraph where some sentences are said to be similar to four different sources, but all of this is just very generic text used to describe experiments on the same dataset as another paper:

(7) And in the conclusion, a few words are said to match another source, but this is not relevant:

With the above examples, I just want to show that Turnitin can sometimes flag text as similar when the similarity is only due to the use of very generic text. Sometimes this happens because an author tends to keep the same writing style across different papers, and sometimes it is because there are just not that many different ways of explaining something. Here is one more example:

In the last sentence, if I want to say that the next section will talk about some new algorithm, there are not many ways that I can say that. I could say, “the … algorithm will be presented/described/introduced in the next section” or “The next section will present/describe/introduce the … algorithm”. But I do not see many other ways of explaining this.
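To illustrate why such generic phrasing tends to get flagged, here is a minimal sketch of a simple word n-gram overlap check. This is not Turnitin’s actual algorithm (which is proprietary); the two sentences and the n-gram length are made up for illustration. It shows how two sentences that only share very common phrasing can still produce a high similarity score:

```python
def word_ngrams(text, n=3):
    """Return the set of consecutive word n-grams of a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_similarity(text_a, text_b, n=3):
    """Fraction of the n-grams of text_a that also appear in text_b."""
    grams_a = word_ngrams(text_a, n)
    grams_b = word_ngrams(text_b, n)
    if not grams_a:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a)

# Two hypothetical sentences that only share a very generic phrase.
submitted = "The proposed algorithm will be presented in the next section"
other = "The new model will be presented in the next section of this paper"

print(ngram_similarity(submitted, other, n=3))  # prints 0.625: a high overlap despite purely generic wording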

Conclusion

In this blog post, I have shown some examples of what I consider “not smart” matches produced in a Turnitin report. To be fair, I should say that Turnitin will also flag some text that is very relevant or somewhat relevant. Here, I just wanted to show some examples that do not look relevant, to highlight that there is still a lot of room for improvement in this tool.

As for the use of Turnitin, it is certainly useful for plagiarism detection but, like any other tool, it also depends on how the results are used by humans. Unfortunately, I have noticed that many conferences and journals do not take the time to read the reports and instead just fix some thresholds to determine what is acceptable. For example, one conference stated that the “similarity index should not be greater than 20% in general and not more than 3% with any single source”. This may seem reasonable, but in practice it is quite strict, as there is always some similarity with other papers. Ideally, someone would manually check the report to determine what is acceptable.
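To make this kind of threshold rule concrete, here is a minimal sketch of how such an automatic check could be applied to the per-source similarities of a report. The function name and the numbers are made up for illustration, and it assumes, for simplicity, that the overall index is the sum of the per-source similarities; actual conferences may apply the rule differently:

```python
def passes_threshold_rule(per_source_similarity, overall_limit=0.20, single_source_limit=0.03):
    """Check the rule 'overall similarity <= 20% and <= 3% with any single source'.

    per_source_similarity: list of similarity fractions, one per matched source.
    """
    overall = sum(per_source_similarity)
    return overall <= overall_limit and all(s <= single_source_limit for s in per_source_similarity)

# Hypothetical report: 15% overall similarity, but 4% comes from a single source,
# so the paper would fail the rule even though the total looks acceptable.
report = [0.04, 0.03, 0.03, 0.03, 0.02]
print(passes_threshold_rule(report))  # False
```

This is exactly why such fixed thresholds can be quite strict in practice: a paper can fail on a single source even when the overall similarity is well within the limit.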
