The success of a research process hinges on the literature review. A literature review (LR) is an academic work that examines the current work on a specific topic. It implicitly demonstrates the author’s knowledge and understanding of the extant literature on the topic. LR is not a mere collection of earlier works; it is a critical evaluation of them.
A thorough analysis of the existing literature allows one to identify its weaknesses and gaps, so that one may design research to fill them. Besides identifying the knowledge gap or gaps, LR also meets many other requirements, such as developing the relevant research theory, placing the research in context, and highlighting what the research will add to the existing body of knowledge. Hence, LR is “the analysis, critical evaluation, and synthesis of existing knowledge relevant to your research problem, thesis, or the issue you are aiming to say something about” (Hart, 2018, pp. 3–4).
LR begins with collecting relevant works on the chosen topic. Next, the researcher needs to run through the material quickly and decide what is relevant and what is not. He or she needs to gain complete mastery of the subject by understanding it in depth and analyzing it critically. All of these steps clearly require hard work, both physical and mental, and plenty of time, and it is no wonder that a researcher who does them manually gains expertise in the subject along the way. However, all this may change, or may already be changing, with the advent of Generative AI, commonly called Gen AI and sometimes shortened to GAI. This article discusses the challenges Gen AI poses and other ethical issues in LR.
Generative AI
Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio, and synthetic data (Lawton, 2023). Although AI, short for artificial intelligence, is not new, the introduction of machine-learning-based generative models that can create realistic-looking images and voices mimicking real people has opened the floodgates of opportunity. Additionally, large strides in Large Language Models (LLMs), with trillions of parameters, have enabled these tools to generate huge amounts of written content besides images, graphics, and videos.
This blog is limited to discussing the opportunities and challenges that Gen AI poses, particularly for the literature review, and not how to use the tools. Tools like SciSpace, ChatPDF, Scite, Elicit, and Research Rabbit, to name a few, have changed the way LR was done until only a few months ago. These tools can access enormous amounts of literature, read it, and answer researchers’ queries, all in a few seconds, saving the researcher a colossal amount of time and energy. AI has several possibilities to support humans (Floridi et al., 2018). This is undoubtedly a big boon to research.
However, on the other side, when the machines (Gen AI tools) search and sift through the contents, will the researcher gain the knowledge that he or she used when it was being done manually? Like deepfakes, are we creating “scholarshipfakes”? Is creativity at stake?
According to Bubeck et al. (2023), the latest LLMs show close to human-level performance and are superior to earlier versions. They produce high-quality responses that cannot be identified as synthetic, and their content raises no suspicion in the human mind as to its veracity. This is both a boon and a bane: a boon, since Gen AI can produce rich academic or educational content; a bane, because it can be biased and wrong. It can be wrong because it is only as intelligent as the data it was trained on. Surprisingly, it can also act more intelligent than it is; this behaviour is dubbed hallucination. Before discussing hallucinations, let us look at what bias means in the GAI context.
Bias
In the Gen AI context, bias is interpreted along different dimensions. As mentioned earlier, GAI’s output relies heavily on the data on which it was trained; its content is therefore only as accurate, or as skewed, as its input. Hence, the term bias in GAI refers to the unfair or skewed output that it generates. In order to use Gen AI ethically and correctly, it is essential to understand bias in greater detail.
Bias Types
There are different types of biases. They are i) Selection bias, ii) Representation bias, iii) Temporal bias, iv) Confirmation bias, and v) Groupthink bias.
Selection bias happens when the Gen AI is trained on data drawn from one particular group or area that does not represent the whole population. For example, if an AI model is trained using text from one region, say India, it may fail to generate content relevant to other regions, say Japan.
Representation bias is somewhat similar to selection bias. When the AI is not trained on data that adequately represents different groups, it will underrepresent certain groups. For example, if an AI is trained on images of Indian women wearing saris, it will not fairly represent women in other attire.
Temporal bias is the inability of the AI to use data more recent than the historical data it was trained on. It would thus continue to generate outdated viewpoints.
Confirmation bias occurs when the AI unintentionally perpetuates existing stereotypes. For instance, an AI may inadvertently presume an engineer to be a man and a nurse to be a woman.
Groupthink bias happens when an AI generates content that reflects the dominant opinions in a group, drowning out diverse ideas.
Undoubtedly, various techniques are used to mitigate the biases mentioned above; data augmentation, adversarial training, and re-sampling are some approaches to alleviating bias in Gen AI. Nevertheless, we must be vigilant while using AI tools and identify and address these biases ourselves. We still need to apply our own intelligence and caution with Gen AI. It is very important that we not become over-reliant on these powerful yet fallible tools.
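Of these mitigation approaches, re-sampling is the simplest to illustrate. The following is a minimal sketch, assuming a hypothetical corpus of records tagged with a region field (echoing the selection-bias example above); real mitigation pipelines are considerably more involved.

```python
import random

def resample_balanced(records, group_key, seed=42):
    """Oversample under-represented groups so that each group
    appears as often as the largest one (naive re-sampling)."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # pad smaller groups with random duplicates up to the target size
        balanced.extend(rng.choice(members)
                        for _ in range(target - len(members)))
    return balanced

# Toy corpus skewed toward one region (cf. the selection-bias example)
corpus = [{"region": "India", "text": f"doc{i}"} for i in range(8)] + \
         [{"region": "Japan", "text": f"doc{i}"} for i in range(2)]
balanced = resample_balanced(corpus, "region")
```

Here the under-represented group is padded with random duplicates until every group is equally frequent; in practice one would prefer collecting genuinely new data over duplicating existing records.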
Hallucinations
In the Gen AI context, a hallucination is an output that is fictitious or nonsensical. Recently, in New York, an attorney of 30 years’ standing, representing a client suing the Colombian airline Avianca over an injury sustained mid-flight, submitted a brief to the court. The judge overseeing the suit found that the brief cited six “bogus judicial decisions with bogus quotes and bogus internal citations.” The attorney had relied on ChatGPT for his legal research, and the Gen AI produced six fictitious judicial decisions and insisted that they were real and available in major legal databases, while the fact was otherwise (Rubenstein, 2023).
Hallucinations by Gen AI are similar to human hallucinations, in which one sees figures and faces amongst shadows, clouds, and the like. In the case of AI, hallucinations happen for many reasons, such as inadequate or inaccurate training data, overfitting, problems with encoding and decoding, and high model complexity.
A major concern regarding Gen AI in academics is the ethical issues it raises. Though the issues may not be new to AI, what is new is that it is becoming more and more difficult to detect the fakes. Although a race is on between the creators of deepfakes and their detectors, it is too early to say who will win. Gen AI also raises many other risks and issues that are not relevant to our discussion of the literature review; hence, this article does not discuss business risks like infringement and the impact on the workforce.
Since the pressure to publish (“Publish or Perish”) weighs on researchers, some unscrupulous people resort to unethical practices, including data falsification, plagiarism, and the off-the-shelf purchase of ready-to-publish manuscripts from “paper mills.” For the uninitiated, “paper mills” are illegal organizations that produce and sell ready-to-publish manuscripts prepared by ghostwriters. These manuscripts are produced on demand, lack genuineness, and are a purely commercial product. Some leading publishing houses, such as Nature Publishing Group, Wiley, and Taylor & Francis, have discovered instances of “paper mill” articles (Murudkar, 2021). In the latest update, Finland intends to downgrade a few leading publishers by the end of 2024 (Anonymous, 2023). Despite editors and reviewers being highly vigilant and using advanced tools such as iThenticate and PlagScan to detect plagiarism, the problem remains challenging to this day.
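Commercial checkers such as iThenticate rely on far more sophisticated matching against vast databases, but the core text-matching idea can be sketched as word n-gram overlap. The function and example sentences below are purely illustrative assumptions, not how any particular tool actually works.

```python
def ngram_overlap(a, b, n=3):
    """Jaccard overlap of word n-grams: a crude proxy for the
    text-matching idea behind plagiarism checkers."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    ga, gb = ngrams(a), ngrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = "the quick brown fox jumps over the lazy dog"
patch  = "the quick brown fox leaps over the lazy dog"
score = ngram_overlap(source, patch)  # swapping one word still leaves heavy overlap
```

A single-word substitution, as in “patchwriting,” still leaves most n-grams intact, which is why such superficial edits are easy for matching tools to flag.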
Academic Dishonesty
Technological advancement is meant to help human beings live happily and comfortably. However, whether a technology is good or bad depends on how it is used, and there are many instances of technology being misused (Ercegovac & Richardson Jr., 2004). Academic dishonesty is a broad term that includes plagiarism, among many other things. Whereas plagiarism constitutes “the unauthorized use of the language and thoughts of another author and the representation of them as one’s own” (Webster’s College Dictionary, p. 1032), academic dishonesty covers “forms of cheating and plagiarism that involve students giving or receiving unauthorized assistance in an academic exercise or receiving credit for work that is not their own” (Kibler, 1993). Though academic dishonesty seems more relevant to students, it applies to researchers too when they have their work done by their students, give them no credit, and unfairly claim authorship.
Plagiarism
Some authors feel strongly about plagiarism and have compared it to stealing and insult. Even when one completely paraphrases someone else’s idea and fails to cite the source, it amounts to plagiarism. Therefore, the golden rule is: when in doubt, cite. Sometimes people resort to “patchwriting”, which is “copying from a source text and then deleting some words, altering grammatical structures, or plugging in one synonym for another” (Howard, 1999).
Academic dishonesty is an umbrella term that goes beyond plagiarism. Besides breaches of data integrity, ghost authorship and gift authorship are the two most common occurrences. Authorship provides recognition and other benefits such as rewards, promotions, financial gain, and prestige, and these benefits spur some to adopt unfair means for personal gain.
An author is defined as one who provides a major contribution (Davidoff, 2000). This can include a person who originates an idea or a plan (Longman Dictionary of Contemporary English). While working with senior colleagues or professors, junior colleagues and scholars are often hesitant to raise authorship issues. A dishonest senior author may omit the name of the person who contributed to the research, did the data analysis, or wrote the manuscript. This is unfair and should be deplored, even if the person has been monetarily or otherwise compensated. The person who has not been given the credit he or she deserves is known as a ghost author, and ghost authorship occurs in different versions with variations of this process. Ghostwriting is most common in the medical literature (Lacasse & Leo, 2010), and the practice is unacceptable and considered dishonest by the World Association of Medical Editors (WAME).
In contrast to the above is gift authorship, which also goes by other names, such as “guest” and “honorary” authorship. In this case, an individual who has not worked on the research or the manuscript is bestowed authorship for various reasons: the director, the departmental head, a senior colleague whose presence increases the chances of publication, and, at times, a “quid pro quo” arrangement. Journals do not encourage these practices, and many policies exist that prohibit gift authorship. One should remember that authorship requires hard work and is to be earned. It is a matter of pride and part of one’s qualifications. It should neither be denied nor donated.
The present system for ensuring the quality and integrity of research is not foolproof, says John Crace in an article titled “Peer trouble” (Crace, 2003). In that article, he reported that “a recent study by the University of Minnesota of 4,000 researchers in more than 100 faculties found that one in three scientists plagiarised, 22% handled data ‘carelessly’ and 15% occasionally withheld unfavourable data.”
Besides the above concerns in the LR, one should also consider the following points:
- Confidentiality – Researchers may have the privilege of accessing sensitive or confidential information. The researchers should ensure that they maintain the respondents’ trust and do not disclose confidential or private information, unless explicitly agreed upon by the respondents.
- Objective information – Researchers, like any other human beings, will have their own minds and opinions on the matter of research. Nevertheless, they should ensure that they objectively evaluate the data and avoid being influenced by their biases. They should not present misleading information.
- Conflict of interest – This is an important aspect of research. The researcher should disclose any potential conflict of interest that could affect the outcome of LR.
Literature reviews are cornerstones of any research. While there may be varied reasons for doing LR, the following two reasons stand out.
- Understanding the current state of research on a topic.
- Discerning the research gaps.
Hence, LR should be given the utmost importance in research. Generative AI is a game changer. For the tech-savvy researcher, it is undoubtedly of great help, but the researcher should be wary of the pitfalls of relying on it too heavily. One can circumvent the dangers of using Gen AI by carefully double-checking the veracity of the content it generates. Gen AI tools are like a double-edged sword: helpful as long as they are used rationally, and harmful and damaging if used inappropriately.
For a good literature review, the researcher should be unbiased and objective. He or she should follow simple principles and be ethical, be transparent and avoid misleading inferences, and be rigorous in collating evidence that is representative of the whole.
Let us master generative AI and use it diligently to create high-quality literature reviews, thereby enhancing the standard of research and research articles.
Anonymous. (2023, December 30). Paper Mills, Predatory Journal, Predatory Publisher. Retrieved December 31, 2023, from Publishing with Integrity: https://predatory-publishing.com/mdpi-frontiers-and-hindawi-journals-may-be-downgraded-in-finland/
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
Crace, J. (2003, February 11). Peer trouble. Retrieved January 1, 2024, from The Guardian: https://www.theguardian.com/education/2003/feb/11/highereducation.research
Davidoff, F. (2000). CSE Task Force on Authorship. Who’s the author? Problems with biomedical authorship, and some possible solutions. Science Editor, 23(4), 111-119.
Ercegovac, Z., & Richardson Jr., J. V. (2004, July). Academic Dishonesty, Plagiarism Included, in the Digital Age: A Literature Review. College & Research Libraries, 65(4), 301-318.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018, November 26). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689-707.
Hart, C. (2018). Doing a Literature Review : Releasing the Research Imagination (2 ed.). London: SAGE Publications Ltd.
Howard, R. M. (1999). Standing in the Shadow of Giants: Plagiarists, Authors, Collaborators. Perspectives on Writing: Theory, Research, Practice. (Vol. 2). Stamford, CT: Ablex Publishing/JAI Press Inc.
Kibler, W. L. (1993). Academic Dishonesty: A Student Development Dilemma. NASPA Journal, 253.
Lacasse, J. R., & Leo, J. (2010, February 2). Ghostwriting at elite academic medical centers in the United States. PLoS Medicine, 7(2).
Lawton, G. (2023). What is generative AI? Everything you need to know. Retrieved December 29, 2023, from TechTarget: https://www.techtarget.com/searchenterpriseai/definition/generative-AI
Longman Dictionary of Contemporary English. (n.d.). Retrieved January 1, 2024, from https://www.ldoceonline.com/dictionary/author
Murudkar, S. (2021, September 30). Paper Mills- A Rising Concern in the Academic Community. Retrieved December 31, 2023, from Enago Academy: https://www.enago.com/academy/paper-mills-a-rising-concern-in-the-academic-community/#:~:text=What%20is%20a%20Paper%20Mill,promotion%20buy%20publication%20ready%20manuscripts.
Rubenstein, A. (2023, May 30). ChatGPT is not quite ready to be your lawyer. Retrieved December 31, 2023, from Morning Brew: https://www.morningbrew.com/daily/stories/2023/05/29/chatgpt-not-lawyer?mbcid=31642653.1628960&mblid=407edcf12ec0&mid=964088404848b7c2f4a8ea179e251bd1&utm_campaign=mb&utm_medium=newsletter&utm_source=morning_brew
Webster’s College Dictionary. (n.d.).