Two Years of ChatGPT: What We’ve Learned and What Comes Next
I first learned about ChatGPT in December 2022 while scrolling through TikTok before bed, a bad habit I’d picked up during the pandemic. At the time, I was working in the technology department at a 6-12 school in Baltimore County, where things were starting to normalize after three semesters of virtual and hybrid instruction. As one use case after another popped up on my For You Page, I quickly realized how disruptive this tool could be in classrooms. What I didn’t anticipate was how deeply generative AI would influence my own career trajectory and the work I now do with CELTT. Here, I’d like to reflect on how the conversation has evolved over the past two years and share what’s ahead for the University of Baltimore.
The pace of generative AI innovation since November 2022 has been extraordinary, with major companies making significant strides. OpenAI’s latest model, o1, showcases advanced reasoning, working through complex, multi-step problems across a range of fields. Meta introduced a groundbreaking new model type, Google’s NotebookLM can now turn source documents into podcast-style audio conversations, and Anthropic launched tools that enhance content creation and let its models operate a computer directly. The field is also moving toward multimodal models that can process diverse data types, such as text, images, and sound, unlocking possibilities in areas like drug discovery and climate tracking. Meanwhile, smaller language models are emerging that outperform older, larger models while requiring less computational power. This shift has the potential to make generative AI more accessible and energy-efficient, further broadening its impact across industries and disciplines.
While these advancements are exciting, they also raise important challenges. The environmental impact of AI remains significant: data centers consume vast amounts of energy and water, and the constant turnover of hardware contributes to electronic waste. Bias in AI systems persists, as models can amplify the biases present in their training data and produce discriminatory outcomes. Privacy is another key concern, since data that users provide can inadvertently be folded into future model training. And while generative AI is improving, it remains imperfect, occasionally producing inaccurate or fabricated information, a phenomenon often referred to as “hallucination.” These issues underscore the need for responsible AI development that prioritizes transparency, fairness, and accountability.
Since March 2023, UBalt has taken significant steps to engage with AI meaningfully. Through workshops, courses, research initiatives, and a dedicated summit, we’ve built a foundation for exploring the opportunities and challenges of this technology. Moving forward, the Academic Affairs AI Steering Committee’s Approach to Generative AI will guide our efforts; CELTT will continue its Generative AI Professional Learning Community and AI in Practice webinar series; and the Merrick School of Business will expand its offerings through the new Master of Science in Artificial Intelligence for Business program. This comprehensive approach reflects UBalt’s commitment to preparing students, faculty, and staff for an AI-driven future.
What started as a curiosity sparked by social media has become a cornerstone of my work at CELTT, shaping not only how I engage with technology but also how UBalt approaches innovation. The rapid evolution of generative AI has been both inspiring and challenging, but UBalt’s proactive engagement positions us to lead in this transformative era. Together, we’re not just adapting to change; we’re embracing it, ensuring we leverage AI responsibly to enhance learning, teaching, and innovation.