There is a lot of buzz around artificial intelligence (AI) making inroads into higher education institutions, where it touches many aspects of their work, including but not limited to admissions, teaching, and grading, all widely discussed across different platforms. The role of chatbots is also a key point of discussion for personalized teaching, though they have not yet matured for higher education. With the widespread use of AI technology, a breed of AI-detector apps has also appeared on the market.
In the fall of last year, the Boston University AI Task Force was formed to evaluate the impact of generative AI on higher education and research, and to review the policies being adopted across BU's academic institutes, schools, and colleges.
BU task force’s recommendations for a critical AI approach
These practices are not unique to BU; colleges and universities across the country have been evaluating AI technology and its impact and forming policies and ethical frameworks around its use. The challenge is that artificial intelligence hasn't matured yet and is still evolving, and each new breakthrough is dazzling enough to dominate attention for a while. All of this started with the launch of ChatGPT, whose maker, OpenAI, is now causing a fresh disruption with its AI video generation model, Sora.
Looking at the advances made in just one year, it is hard to predict what will be on the table, say, five years from now. Coming back to education and BU's efforts, the task force released its findings after consulting with faculty and industry experts. The report, titled "Report on Generative AI in Education and Research," highlights quite a few points, but its main focus is on dealing with generative AI with a critical approach. The advice is addressed to faculty, but what about students? Perhaps it is meant for everyone involved.
The report also advises that students be informed about AI in their syllabi and educated on its capabilities and appropriate use. BU also published the views of some of its top professors on AI integration. Professor Yannis Paschalidis of Engineering, asked what a "critical embrace" of AI means in practical terms, said,
“Acknowledge that AI is here to stay and it can be an effective tool that can be used to accelerate research and enhance teaching.”
Source: Boston University
He also added,
“In practice, this implies using it with caution. With the university adopting a policy to “critically embrace” AI, schools and colleges would be guided to adopt their own local policies adapted to the needs of the disciplines they serve.”
Source: Boston University
Professor Paschalidis says these local policies must be consistent with the broader university-wide policies, but he emphasizes that there must be room for faculty to make their own decisions on the subject, adapted to the requirements of their classes.
AI-detector apps and their unreliability
The BU report also addresses AI-monitoring programs, since students are likely already using generative AI given the availability of various apps and services, while the university wants to maintain and promote academic integrity and proceed with caution. As for the detector apps that flag AI usage, Professor Paschalidis does not view them as bulletproof. He notes that these programs only provide an estimate of whether AI was used to generate a given piece of text, audio, or video, and they are not 100 percent correct. So when assessing academic misconduct, faculty have to be vigilant and not rely entirely on the tools.
Another professor, Wesley J. Wildman of Philosophy, says that instructors and students might end up using the same AI-detection software the way they now use plagiarism checkers. He says AI detectors are not yet consistent or reliable: they can be fooled, to some extent, by deliberate spelling and grammatical errors, which puts a question mark over their accuracy. For this very reason, instructors are advised to exercise caution when using these tools.
Reflecting on BU's effort to produce a thorough report on AI usage, Professor Paschalidis said the task force was formed in the final months of 2023 and carried out its work through the spring 2024 semester. The group consulted many outside experts, industry figures, and consultants, as well as experts at BU itself, bringing on board people from every discipline so that each could contribute the perspective of their own field on AI. All of this was done so that the university could establish a unified framework for the technology, allocate resources for training, and adopt new ways of teaching with the emerging tech.
On the same matter, Professor Wildman said,
“We heard from experts representing disciplines not already named to the task force.”
Source: Boston University
He expects the key recommendations from the report to be implemented immediately, while others may take longer given their nature and the resources they require. But the overarching approach of handling artificial intelligence critically is being put into practice without delay.
Find the BU report here.