
Exploring generative AI at Harvard

Resource from The Harvard Gazette



Leaders weigh in on where we are and what’s next


The explosion of generative AI technology over the past year and a half is raising big questions about how these tools will impact higher education. Across Harvard, members of the community have been exploring how GenAI will change the ways we teach, learn, research, and work.


As part of this effort, the Office of the Provost has convened three working groups to discuss emerging questions, share innovations, and develop guidance and community resources. They are:


  • The Teaching and Learning Group, chaired by Bharat Anand, vice provost for advances in learning and the Henry R. Byers Professor of Business Administration at Harvard Business School. This group seeks to share resources, identify emerging best practices, guide policies, and support the development of tools to address common challenges among faculty and students.

  • The Research and Scholarship Group, chaired by John Shaw, vice provost for research, Harry C. Dudley Professor of Structural and Economic Geology in the Earth and Planetary Sciences Department, and professor of environmental science and engineering in the Paulson School of Engineering and Applied Sciences. It focuses on how to enable, and support the integrity of, scholarly activities with generative AI tools.

  • The Administration and Operations Group, chaired by Klara Jelinkova, vice president and University chief information officer. It is charged with addressing information security, data privacy, procurement, and administration and organizational efficiencies.



Klara Jelinkova, Bharat Anand, and John Shaw. Photos by Kris Snibbe/Harvard Staff Photographer; Evgenia Eliseeva; and courtesy of John Shaw.


The Gazette spoke with Anand, Shaw, and Jelinkova to understand more about the work of these groups and what’s next in generative AI at Harvard.



When generative AI tools first emerged, we saw universities respond in a variety of ways — from encouraging experimentation to prohibiting their use. What was Harvard’s overall approach?


Shaw: From the outset, Harvard has embraced the prospective benefits that GenAI offers to teaching, research, and administration across the University, while being mindful of the potential pitfalls. As a University, our mission is to help enable discovery and innovation, so we had a mandate to actively engage. We set some initial, broad policies that helped guide us, and have worked directly with groups across the institution to provide tools and resources to inspire exploration.


Jelinkova: The rapid emergence of these tools meant the University needed to react quickly, to provide both tools for innovation and experimentation and guidelines to ensure their responsible use. We rapidly built an AI Sandbox to enable faculty, students, and staff to experiment with multiple large language models in a secure environment. We also worked with external vendors to acquire enterprise licenses for a variety of tools to meet many different use cases. Through the working groups, we have been able to gather and collate use cases for AI in teaching, learning, administration, and research. This coordinated, collective, and strategic approach has put Harvard ahead of many peers in higher education.


Anand: Teaching and learning are fundamentally decentralized activities. So our approach was to ask: First, how can we ensure that local experimentation by faculty and staff is enabled as much as possible; and second, how can we ensure that it’s consistent with University policies on IP, copyright, and security? We also wanted to ensure that novel emerging practices were shared across Schools, rather than remaining siloed.


What do these tools mean for faculty, in terms of the challenges they pose or the opportunities they offer? Is there anything you’re particularly excited about?


Anand: Let’s start with some salient challenges. How do we first sift through the hype that’s accompanied GenAI? How can we make it easy for faculty to use GenAI tools in their classrooms without overburdening them with yet another technology? How can one address real concerns about GenAI’s impact?


While we’re still early in this journey, many compelling opportunities — and more importantly, some systematic ways of thinking about them — are emerging. Various Harvard faculty have leaned into experimenting with LLMs in their classrooms. Our team has now interviewed over 30 colleagues across Harvard and curated short videos that capture their learnings. I encourage everyone to view these materials on the new GenAI site; they are remarkable in their depth and breadth of insight.


Here’s a sample: While LLMs are commonly used for Q&A, our faculty have creatively used them for a broader variety of tasks, such as simulating tutors that guide learning by asking questions, simulating instructional designers to provide active learning tips, and simulating student voices to predict how a class discussion might flow, thus aiding in lesson preparation. Others demonstrate how more sophisticated prompts or “prompt engineering” are often necessary to yield more sophisticated LLM responses, and how LLMs can extend well beyond text-based responses to visuals, simulations, coding, and games. And several faculty show how LLMs can help overcome subtle yet important learning frictions like skill gaps in coding, language literacy, or math.




Do these tools offer students an opportunity to support or expand upon their learning?


Anand: Yes. GenAI represents a unique area of innovation where students and faculty are working together. Many colleagues are incorporating student feedback into the GenAI portions of their curriculum or making their own GenAI tools available to students. Since GenAI is new, the pedagogical path is not yet well defined; students have an opportunity to make their voices heard, as co-creators, on what they think the future of their learning should look like.


Beyond this, we’re starting to see other learning benefits. Importantly, GenAI can reach beyond a lecture hall. Thoughtful prompt engineering can turn even publicly available GenAI tools into tutorbots that generate interactive practice problems, act as expert conversational aids for material review, or increase TA teams’ capacity. That means both that the classroom is expanding and that more of it is in students’ hands. There’s also evidence that these bots field more questions than teaching teams can normally address and can be more comfortable and accessible for some students.


Of course, we need to identify and counter harmful patterns. There is a risk, in this early and enthusiastic period, of fostering over-reliance on GenAI. Students must critically evaluate how and where they use it, given the possibility of inaccurate or inappropriate responses, and should recognize the areas where their own cognition outperforms AI. Another thing to watch for is a user divide: some students will graduate with vastly better prompt-engineering skills than others, an inequality that will only magnify in the workforce.





What are the main questions your group has been tackling?


Anand: Our group divided its work into three subgroups focused on policy, tools, and resources. We’ve helped guide initial policies to ensure safe and responsible use; begun curating resources for faculty in a One Harvard repository; and are exploring which tools the University should invest in or develop to ensure that educators and researchers can continue to advance their work.


In the fall, we focused on supporting and guiding HUIT’s development of the AI Sandbox. The Harvard Initiative for Learning and Teaching’s annual conference, which focused exclusively on GenAI, had its highest participation in 10 years. Recently, we’ve been working with the research group to inform the development of tools that promise broad, generalizable use for faculty (e.g., tutorbots).


What has your group focused on in discussions so far about generative AI tools’ use in research?


Shaw: Our group draws on the incredible strength of researchers at the cutting edge of GenAI development and applications, but it also includes voices that help us understand the real barriers faculty and students face in starting to use these tools in their own research and scholarship. Working with the other teams, we have focused on supporting the development and use of the AI Sandbox, examining IP and security issues, and learning from groups across campus how they are using these tools to innovate.


Are there key areas of focus for your group in the coming months?


Shaw: We are focused on establishing programs — such as the new GenAI Milton Fund track — to help support innovation in the application of these tools across the wide range of scholarship on our campus. We are also working with the College to develop new programs to help support students who wish to engage with faculty on GenAI-enabled projects. We aim to find ways to convene students and scholars to share their experiences and build a stronger community of practitioners across campus.


What types of administration and operations questions is your group exploring, and what opportunities do you see in this space?


Jelinkova: By using the group to share learnings from across Schools and units, we can better provide technologies to meet the community’s needs while ensuring the most responsible and sustainable use of the University’s financial resources. The connections within this group also inform the guidelines that we provide; by learning how generative AI is being used in different contexts, we can develop best practices and stay alert to emerging risks. There are new tools becoming available almost every day, and many exciting experiments and pilots happening across Harvard, so it’s important to regularly review and update the guidance we provide to our community.


Can you talk a bit about what has come out of these discussions, or other exciting things to come?


Jelinkova: Because this technology is rapidly evolving, we are continually tracking the release of new tools and working with our vendors, as well as open-source efforts, to ensure we are best supporting the University's needs. We're developing more guidance and hosting information sessions to help people understand the AI landscape and choose the right tool for their task. Beyond tools, we're also working to build connections across Harvard to support collaboration, including a recently launched AI community of practice. We are capturing valuable findings from emerging-technology pilot programs in HUIT, the EVP area, and across Schools, and we are now thinking about how those findings can inform guiding principles and best practices to better support staff.


While the GenAI groups are investigating these questions, Harvard faculty and scholars are also at the forefront of research in this space. Can you talk a bit about some of the interesting research happening across the University in AI more broadly?


Shaw: Harvard has made deep investments in the development and application of AI across our campus, in our Schools, initiatives, and institutes, such as the Kempner Institute and the Harvard Data Science Initiative. In addition, there is a critical role for us to play in examining and guiding the ethics of AI applications, and our strengths in the Safra and Berkman Klein centers, for example, position us to be leading voices in this area.


What would be your advice for members of our community who are interested in learning more about generative AI tools?

Anand: I’d encourage our community to view the resources available on the new Generative AI @ Harvard website, to better understand how GenAI tools might benefit you.


There’s also no substitute for experimentation with these tools to learn what works, what does not, and how to tailor them for maximal benefit for your particular needs.


And of course, please know and respect University policies around copyright and security.

We’re in the early stages of this journey at Harvard, but it’s exciting.


