SAK: This article was written using ChatGPT

It’s time we treat generative AI as an essential educational tool, not a threat.
AI-generated image depicting a graduation cap made out of a circuit board. Graphic generated using DALL·E through ChatGPT. (Hustler Staff/Daniel Sak)
Daniel Sak

A little over a year ago, I had a conversation with a classmate about this cool new site that could write an essay for you. 

She told me that her friend tried it out and got a surprisingly good score on that assignment. With heavy skepticism, I laughed it off and got back to work, not thinking much of it. That tool was ChatGPT, and I assumed that would be the last time I heard of it. A tool that writes essays for you has to be against the rules, right? 

Contrary to what I thought at the time, ChatGPT continued to be a subject of scrutiny over the next few months. Academia had to quickly come to terms with how we were to handle this new tool. When would it be appropriate to use generative AI? Is using ChatGPT “cheating?” What really counts as one’s own work in this new AI-assisted era?

It didn’t take long until Vanderbilt was forced to face the issue of AI usage head-on. On Feb. 16, 2023, the Peabody Office of Equity, Diversity, and Inclusion used ChatGPT to write an email in response to the school shooting at Michigan State University. Chancellor Diermeier condemned the email and clarified that AI usage would constitute a violation of the Honor Code in “most” cases; however, this policy came off as hypocritical given the recent scandal.  

While this incident took place only a year ago, policies regarding the usage of AI at Vanderbilt have changed considerably since then. Currently, the university’s policy states that “[f]aculty should decide whether and how generative AI is used in courses,” and they “should clearly communicate expectations to students.” The policy is clear, however, that it is the student’s responsibility to understand the rules for AI usage in each of their courses. To further complicate this policy, Vanderbilt disabled Turnitin’s AI Detector due to accuracy issues right before the start of the Fall 2023 semester, leaving professors with no alternative tool to detect ChatGPT usage.  

Vanderbilt’s position on AI leaves students and faculty in a tough spot. Students have to learn each professor’s specific AI policy, which can vary significantly from class to class, with the risk of being punished if they misunderstand any of these course-specific rules. When you’re taking five or six classes with varying policies, keeping up with them can feel impossible. The fear of misunderstanding a course policy discourages students from using AI even when it’s allowed, since they can’t always gauge the risk of being referred to the Honor Council.

On the other hand, faculty have no way of providing concrete evidence that a student has used generative AI in a manner that violates course policy, so a student who uses ChatGPT unethically has a very high chance of getting away with it. Under the university’s AI policy, everyone loses, from students attempting to follow the rules to professors trying to learn about AI and incorporate new technology into their pedagogy. It also leaves open significant possibilities for false accusations.


I’ve faced these dilemmas with this novel tool. When ChatGPT first came out, I didn’t use it much in an attempt to abide by the ever-shifting university rules. For the first year of its existence, I used it as little more than a toy — asking it silly questions for my own amusement. Since the rules surrounding it were so unclear, I opted to completely exclude it from any of my class work regardless of what the professor wrote in the syllabus. I didn’t want to risk implicating myself inadvertently.

This changed when I took a class with a professor this semester who actively encouraged us to use ChatGPT and any other technologies in any capacity that we wanted. This freedom gave me the confidence to experiment with ChatGPT because I knew I wouldn’t be punished for trying something new. 

Embracing the new, I purchased the upgraded version of ChatGPT and began using it across my academic work. I used it for research, uncovering and parsing a breadth of information. I used it to assist my writing when I hit a mental roadblock or needed help with phrasing. It excelled at helping me dissect and analyze my readings and pinpoint their main ideas and themes. By generating summaries of lengthy reading assignments and asking ChatGPT clarifying questions, I felt I understood the material better without having to wait for class or go to office hours.

Outside the classroom, I also experimented with ChatGPT in other ways. In Anchor Marketing, a student organization I lead, we’ve used AI for brainstorming names and generating images. Using AI saves time on tedious tasks and allows us to accomplish things like logo and advertisement designs that previously required specific artistic skills and training. In my personal endeavors, I used it to update my Etsy store and product descriptions, resulting in my first sale after a year and a half of being in operation.

It may not come as a surprise that I used ChatGPT to write this article. You might assume I simply asked ChatGPT to write an article in my own voice. I fed all of my past opinion articles into ChatGPT to try exactly that, but I got back a jumble of ideas from previous pieces, tossed together to almost form an overarching theme. It was not well-written, it was not my opinion and it was not something that I would put my name to.

Instead, I used ChatGPT to help write and enhance parts of this article. I had it generate over 60 different responses to help with my writer’s block and with other tasks such as editing, structuring sentences, gathering information and generating the cover graphic. However, the thoughts, stories and opinions are all my own. Every element that ChatGPT wrote has been edited by me and by the same team of Hustler editors that edits other opinion pieces.

Our current debate about ChatGPT’s inclusion in university courses misses these other, equally valuable uses of the tool. We tend to think of AI generation as one quick-and-easy step: you ask it for an essay or image and it gives it to you. In my experience, the tool functions better when you engage with it in a “conversation.” You feed it the information you want it to know, try out certain prompts and continue to adjust what you’re asking of it until you finally get what you’re looking for. You might have to make additional changes on your own after you’ve gone as far as you can with ChatGPT. In the end, you’ve put significant thought and effort into your end product; you just got to that point in a different way.

Having used AI for these purposes, I don’t struggle to claim ownership of the work I’ve done with it, because AI was only a tool. In the same vein, I would never say my papers aren’t my own simply because I used spell check. So why would we say that a paper is not a student’s work because they used AI to help them produce it?

Treating ChatGPT as a legitimate tool rather than an illicit crutch invites us to rethink our approach to education, creativity and innovation. By harnessing its capabilities responsibly, we unlock new potential for learning and problem-solving. Vanderbilt seems to understand this concept; it recently established the Vanderbilt Initiative on the Future of Learning & Generative AI to “make Vanderbilt a world leader in Generative AI.” Unfortunately, we can’t become a world leader in this area if Vanderbilt students fear repercussions for wanting to experiment with AI.


I recognize that there are some concerns regarding the unrestricted introduction of generative AI into the classroom, especially with regard to students’ original, written thoughts. As a communications major and a writer for the student newspaper, I understand the importance of being a good writer. However, we also have to consider that Vanderbilt students have already learned strong writing skills prior to coming here. Any other fundamental skills that need to be memorized, like coding basics in computer science or synthesis reactions in a chemistry course, can still be tested in a seated exam. As for the concerns regarding original thought, the reality of current AI tools is that they cannot create a meaningful final product without human assistance. If an assignment can be easily completed by ChatGPT without engaging the student in an intellectual capacity, we should be evaluating the merits of the assignment itself, not banning ChatGPT.

Furthermore, the consequences of not allowing students to learn with ChatGPT are too severe to ignore. Employers are starting to incorporate AI into their workflow, and we can only expect this integration to increase. Even Vanderbilt employees are “encouraged to harness the capabilities of generative AI and incorporate them into their day-to-day workflows.” In the meantime, Vanderbilt students are missing out on prime opportunities to learn skills like AI prompting as they prepare to enter a job market that’s increasingly demanding those exact skills. If Vanderbilt administrators want to lead the world in this rapidly expanding field, they should set an example by taking a firm stance in favor of students engaging with AI as a tool.

I know that this is a time of great concern and upheaval, and I understand why people are nervous about the future. The way to deal with this change isn’t to hide in the corner and cling to what we’re comfortable with. For better or worse, what’s worked in the past might not work in the future. The only way to deal with this ever-changing future is to prepare ourselves for it and to learn what we need to know to be successful in this new world. Vanderbilt, it’s time to remove the remaining barriers to AI usage and allow your students to prepare for this future.

About the Contributor
Daniel Sak
Daniel Sak, Senior Staff Writer
Daniel Sak (‘25) is from Shelby Township, Mich., and is majoring in human and organizational development, political science and communication studies in Peabody College. Outside of The Hustler, Daniel serves as the president of Anchor Marketing. He can be reached at [email protected].
Comments (2)

Art made by no one means nothing
1 month ago

“Vanderbilt students have already learned strong writing skills prior to coming here.”

This is a dangerous and anti-education view of the situation. I’m Vandy ‘23 and a humanities major, and personal experience has shown me that plenty of Vandy students—even within my own humanities departments—desperately needed to take more writing-intensive courses during their time at the university, not to mention those non-humanities students who eke their way through only the required writing courses and can’t correctly use a comma and a conjunction at 23 years old with a diploma from a pseudo-Ivy. Granted, maybe the world we’re moving into is one in which these once-crucial storytelling skills will no longer be required, especially as I can see the merit in using AI for certain things in the corporate or coding spheres. But it’s still disingenuous to hide behind that bleak future as an excuse not to write your essay or even this article.

ChatGPT helped you organize your ideas? You’re an opinions journalist! That’s like your one job! I, for one, didn’t need AI to give me this idea:

Art made by no one means nothing.

VU Truth
1 month ago

Not all Vanderbilt students have strong writing skills coming into Vanderbilt. Affirmative Action admissions policies combined with DEI initiatives mean that academic merit is valued less than other factors.