We asked our current Masters Scholarship award student Daria to share a personal perspective on the use of AI tools and ChatGPT.
With OpenAI’s future for ChatGPT as an open-source, not-for-profit enterprise being questioned in recent months, regulators are actively negotiating with key stakeholders over suitable AI controls. Just last month, the UK government adopted a framework for regulating AI, with the expectation that sectoral regulators will provide direction to businesses at the end of April 2024 and in anticipation of new legislation in the near future. AI is discussed as frequently as ever, if not more than when it first gained popularity.
While regulation is still pending at the governmental and global level, universities have faced a mounting challenge of students using ChatGPT to cheat in online exams and to plagiarise, and have had to create their own guidance, policies, and codes of good practice.
Plagiarism itself is not new: it has always been unethical to claim someone else’s ideas as your own, and universities have long had well-established procedures against it. However, AI has pushed it to a scale and efficiency that present a challenge of their own sort. The proliferation of AI tools among students has led many institutions to tighten their methods of assessment.
The most popular of these tools have been Large Language Models (“LLMs”), with ChatGPT being the most commonly used. The algorithm applies a set of rules and parameters to a huge body of existing data, comprising published online material up to January 2022. From this it has learnt which words, in which order, are most commonly combined. These patterns then allow the algorithm to generate the response it predicts is most likely to follow the text entered by the user.
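To make that pattern-matching idea concrete, here is a deliberately toy sketch in Python: it counts which word most often follows another in a tiny made-up corpus and then “predicts” the next word from those counts. Real LLMs such as ChatGPT use vast neural networks trained on enormous datasets rather than simple word counts, so this is only an illustration of the intuition, not how ChatGPT actually works.

```python
# Toy illustration of next-word prediction from word-pair frequencies.
# This is a simplified sketch of the intuition only, not how ChatGPT works.
from collections import defaultdict, Counter

corpus = ("the court held that the contract was void "
          "the court held that the claim failed").split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("court"))  # -> "held", the most common continuation here
```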
However, as ChatGPT itself warns, it cannot guarantee factual accuracy. When I asked, it clarified that it “is a tool for generating human-like text based on the input it receives. It does not have decision-making authority, and users should exercise their own judgment when interpreting and acting upon the information provided by the model”[1].
I have found this to be true when testing it against my own legal knowledge: it frequently gets the facts and reasoning of judgments, and even key legal principles, wrong. At my undergraduate university, professors tried asking it exam questions, and many said they would fail the answers it gave; a select few academics who experimented with additional parameters confirmed that it ‘did not get good marks’.
Not only is the algorithm prone to glitching and generating ‘phantom’ facts and sources, but it also often fails to self-correct even when given the opportunity to do so. Out of curiosity, I once asked it to categorise a company’s offices by country and continent and noticed that it had placed the Australian offices in North America. I asked it to correct this, which it did, but it then omitted to categorise others entirely. The amusing apology generated afterwards, unfortunately, does not make up for its limits.
ChatGPT’s ability to produce persuasive answers to questions has made it a very popular and easily accessible quick fix, and unfortunately many students have come to rely on it as one. In a UK barrister’s survey of 900 undergraduate students, one in six admitted to cheating. Universities have attempted to tackle this in many ways.
As a result, methods of assessment differ across universities: no two institutions handle AI identically. For online exams, many universities have started implementing plagiarism checkers and AI-detection tools, and increasing numbers of students have since been caught out for plagiarism and non-permitted use of AI. This has been particularly true of modules with coursework submission elements or those using platforms which permit browsing. Other universities have switched to an in-person format, with many requiring exams to be handwritten.
On my current programme, the London School of Economics (“LSE”) uses an in-person exam format as well as enhanced plagiarism and AI checks for coursework. Interestingly, this in-person format permits students to take exams on their laptop (or other device, as arranged in advance) so long as they install exam software which blocks all third-party apps from running. Open- or closed-book policy then differs from course to course, as professors have discretion to decide whether students can bring in their own notes and books.
This approach differs from my previous university, where students could sit their online exams anywhere on campus with other apps accessible. In practice, however, the scope for browsing during exams was quite limited: we had to answer four highly complex questions in three hours. Any students who pasted plagiarised or AI-generated material had their exams declassed or marked at zero, with significant implications for their degrees. Such cases were very rare, as students were coached and constantly reminded about the risks of plagiarism.
Prospective students coming to UK universities from jurisdictions where plagiarism controls are less stringent should review the AI-use and anti-plagiarism policies at their future universities so that they know what is expected of them. The standards can sometimes be higher than expected.
Students who have grown accustomed to using AI as an aid or complement to their studies may particularly need to re-evaluate their study methods, because any assessed work into which AI-generated material has been pasted is likely to be flagged by AI checkers. University departments have sometimes been known to use several checkers at once, so it is highly inadvisable to look for workarounds; in many ways, it is much easier to pursue courses without recourse to AI than to constantly troubleshoot.
Thankfully, many universities are well-equipped to advise students on how to make the most of their degrees. Often there are departments and/or dedicated staff specifically trained to teach students how to study effectively, with special availability for international students too.
In my experience with applications, ChatGPT has been helpful for rephrasing: if I am struggling to express a particularly complex idea, it is good at offering an array of more concise ways of conveying the concept. It has also been useful for adapting tone and register.
Otherwise, it is not helpful for writing applications. As the applicant market for companies and institutions in the UK is extremely competitive, recruiters and admissions officers are well-versed in spotting AI-generated applications. ChatGPT is not well-equipped to write a directly relevant, deeply personal application.
For research, I have found ChatGPT helpful for explaining complex terms and concepts where there is a wide body of published, expert-backed material on the subject. In particular, I found it immensely helpful for understanding finance for one of my courses. As the finance industry benefits from an active online community which publishes many reports, discussions, and insights, there is plenty of material for LLMs to draw on. I could ask it to ‘act out’ a scenario and use examples which felt natural to me.
It is important to emphasise, however, that AI is not useful for research itself, even if it may form part of the preliminary goal-setting and idea generation process. For postgraduate study, an advanced understanding of research methods is required which LLMs simply cannot match. Students must be aware of ongoing research developments, which means any LLM’s utility is curtailed at its cut-off date, such as ChatGPT’s January 2022 cut-off.
At this level, students must also be capable of applying their newfound expertise in innovative ways. As LLMs rely specifically on material that has already been published, they have no inherent capacity to innovate: although they are sometimes capable of identifying information deficits, they are not well-equipped to plug them with new ideas. Moreover, students are expected to cross-reference between far better-equipped advanced research tools; using LLMs beyond their language capabilities does not lead to research which makes a useful contribution to any field.
Would I use ChatGPT myself, then? For me personally, no. Any value I know I can add to an institution is deeply personal and cannot be accurately conveyed in an application with more than de minimis help from ChatGPT. To satisfy my curiosity, I have experimented with various iterations, but have been disappointed every time. I find far more value in staying up to date with industry news, in the excellent industry insights provided by my university, and in engagement with my peers. If I do use ChatGPT, it will only be for minor rephrasing or for playing out hypothetical scenarios; any inference or idea I come up with will be all mine.
This is not to say that AI should be abandoned completely. On the contrary, an interesting question that many in the job market are currently asking is: how will AI affect my industry? Many services firms have started adopting bespoke AI tools to make their work processes more efficient and to deliver better outcomes for clients. In law, it is becoming increasingly important for me to understand the dos and don’ts of how ‘legal tech’ can improve the firm’s or its clients’ work. For prospective students, then, having some familiarity with the benefits and, most importantly, the limits of AI can serve their future well.
[1] OpenAI. (2023). ChatGPT (Nov 23 version) [Large language model]. https://chat.openai.com/chat