Boston University calls for ‘critical embrace’ of generative AI in new report

Boston University is among the latest higher education institutions to recommend that its faculty not outright prohibit generative artificial intelligence tools.

As higher education institutions around the country attempt to manage the explosion of new generative artificial intelligence tools, Boston University this month published a report calling for “critical embrace” of the quickly advancing technology.

The private nonprofit university’s report says the institution “should not universally prohibit or restrict the use of [generative AI] tools.” Instead, the report’s authors, a task force composed of administrators and engineering and philosophy professors, wrote that Boston University should “critically embrace” generative AI, support AI literacy among students and faculty, supply resources to “maximize” the technology’s research and education benefits, and “exercise leadership in helping faculty and students craft adaptive responses.”

The task force’s recommendations arrive as students by the millions are apparently using generative AI to write their research papers, according to research published this month by the software company Turnitin. Similarly, a survey published last May found that one-third of university students were using ChatGPT to complete homework assignments.

Universities around the country have scrambled over the past year and a half to develop policies governing generative AI following the public launch of ChatGPT. The edtech firm Anthology in January published a framework to help institutions develop AI policies tailored to their particular needs.
In addition to urging a critical embrace of generative AI, Boston University’s task force made several other recommendations, including that instructors should be free to define policies “suited to the learning goals of their courses.” It also says the university should require every instructor to include an AI policy on each course syllabus.

As quickly as generative AI tools have cropped up, so too have tools to detect their use, including a popular one developed by Turnitin and released in April 2023. Boston University’s report advises instructors to “exercise caution” when using such tools and to consider their output as only one factor when evaluating whether students have cheated.

“Using them can generate both false positives and false negatives, which complicates evaluation of academic misconduct charges,” the report reads. “They may also be inconsistent, in the sense that the same query may yield different results, which makes it difficult for students and faculty to use detectors to protect themselves against the nightmare of academic misconduct charges based on false positives.”
