Academics and AI: Where is it going?

Artificial intelligence (AI) has taken the world by storm. The widespread use of large language models (LLMs) like ChatGPT and Grok, along with video-generation tools like Sora and AI bots being added to nearly everything, has driven the technology's rapid growth. AI is now even entering the military, with Palantir signing contracts with the U.S. government. It has affected the academic world as well, which is the focus of this discussion. 

As a history and government major, AI looms over my field of study like a monster in the dark corner of your room. The ability to synthesize large amounts of material and scan it for common themes is incredible, but it is also exactly what historians do. On the government side, AI spreads misinformation and often functions as a confirmation-bias machine. This is especially scary as people continue to turn to ChatGPT and hail its answers as holy scripture. It's more important now than ever to remember that not everything on the internet is true. 

The academic world has felt the early effects of LLMs becoming widely accessible to students, particularly through a huge rise in cheating. These effects appear first in assignment submissions, as students turn in work more sophisticated than their actual understanding. Often, this results in assignments that are completely wrong or incoherent. This not only affects their grades but also harms their future, perhaps even more so when a professor doesn't catch it. 

As a student who doesn't use AI, I see its effects most clearly in the classroom environment. There's a noticeable difference between students who do the readings and those who don't, or who use ChatGPT to summarize them minutes before class. I often find myself alone in class discussions because of this. 

There have been attempts to stop the rise of AI in academia, but there is no mistaking that it is here to stay. Even so, approaches to AI vary enormously among individual professors and students. Among my peers, some turn to ChatGPT more often than their textbooks, Google, or their professors, while others refuse to use it at all. Professors differ just as much, with some banning AI and others encouraging its use in their classes. This lack of consistency will be interesting to watch as it plays out. 

This inconsistency also trickles into how cheating with AI is defined. That is an obvious problem: without clear standards on AI use, there can be no firm rules for when it becomes excessive. It's also becoming increasingly difficult to use the internet without encountering AI, so for those who don't want to use it at all, drawing that hard boundary has become more challenging. 

The lack of consistency on this subject will cause rifts down the road, and I am curious to see how it plays out. The rise of AI is extremely unsettling, especially considering humanity's tendency to build entire societies around the tools it uses. The idea of an "AI Age," following the Stone, Bronze, Iron, and Digital Ages, is a little scary. It's important to remember, though, that each of those transitions was once feared, yet each ultimately made humanity a little better. It may be difficult to imagine how something as foreign as AI will do the same, but try to have some faith. 
