Human Oversight, Accountability, and Critical Evaluation: Takeaways from “Pamplin and AI Responsibility” Panel
More than 75 alumni and friends attended the Pamplin Society Signature Event last month and heard a distinguished panel discuss “Pamplin and AI Responsibility: Tools, Impacts, and Business Insights.”
An edited version of their discussion is below.
Saonee Sarker: How has AI changed your work and the roles you play?
Michelle Seref: AI is transforming education, and we’re integrating it into our curriculum. For example, students use AI tools in coding classes to debug or even generate code and then analyze the outputs. It’s about teaching them to use AI as a tool for enhancing their learning rather than replacing their problem-solving skills.
Marco Leung: At Deloitte, AI now influences nearly every aspect of our work, from client interactions to operational efficiencies. We developed an AI tool called “Psychic” for internal use, which 75% of our workforce actively engages with. It’s reshaping how we deliver value and how we think about hiring new talent.
Sujey Edward: In my role, AI is now essential to every service area we offer. AI allows us to respond faster and more effectively to client needs, but we emphasize understanding both its strengths and limitations. It’s crucial that AI complements human expertise, especially when it comes to strategic decision-making.
SS: AI skills seem to command a premium salary. What counts as an “AI skill,” and how do you define it?
SE: AI skills go beyond coding or technical knowledge; they involve knowing how to leverage AI for practical applications. For example, cybersecurity professionals must understand how AI can enhance threat detection. The skill premium reflects not only technical expertise but the ability to apply AI meaningfully in one’s field.
ML: From a consulting standpoint, AI skill means knowing how to use AI effectively within specific business contexts. Our focus is on helping employees leverage AI to produce better client outcomes. It’s more about how AI integrates into professional workflows than about a standalone technical skill.
MS: For us in academia, teaching AI skills means preparing students to understand, interact with, and critically evaluate AI tools. AI literacy is a must, and students need to comprehend AI’s strengths and limitations so they can integrate it thoughtfully into their work.
SS: How do you ensure AI is used responsibly and ethically in your organizations?
ML: At Deloitte, we emphasize trust and transparency in AI usage. We constantly evaluate the data we feed into AI models, aware of potential biases. We also have strict data governance measures to prevent issues like data leakage. We involve legal and risk teams to assess AI-related decisions carefully.
Ivo Djoubrailov: In the federal sector, we adhere to requirements such as the executive order on responsible AI use. This approach includes data ethics, transparency, and continual testing to ensure models align with agency policies. It’s about balancing innovation with accountability.
SE: IBM has an AI ethics board that scrutinizes AI projects based on use cases. For example, certain facial recognition applications might be acceptable in a combat zone but not on U.S. soil. Our focus is on ensuring AI applications align with ethical standards and mission needs.
SS: What are the biggest challenges in implementing AI responsibly?
ML: One challenge is the lack of skilled professionals who understand AI’s complex aspects. AI development requires expertise in data engineering, statistics, neural networks, and more. As the technology evolves, so does the need for multidisciplinary skills, making it hard to find and retain talent.
SE: Keeping up with AI’s rapid advancement is also challenging. Traditional career paths may no longer apply as AI demands new skills. We need a steady pipeline of graduates with AI knowledge, but the industry is moving so quickly that it’s difficult to prepare for what comes next.
ML: Another significant challenge is moving from experimentation to production. Many AI projects stall at the pilot stage because of difficulties with scaling and governance. It’s one thing to test an AI model, quite another to implement it at a scale where outcomes directly impact clients or the public.
SS: How do you assess AI’s output for accuracy and avoid over-reliance?
ML: At Deloitte, we apply human oversight to AI models, especially for high-stakes decisions. For instance, AI helps us analyze data, but experienced consultants always validate the final assessment and strategy recommendations to ensure accuracy.
SE: We believe in the “human in the loop” approach. AI can produce remarkable results, but humans must review its outputs critically. For certain applications, especially in defense, AI must be rigorously tested, and the final decisions are human-driven.
MS: In education, we instill in students that AI’s output is only as good as the data fed into it. We emphasize the importance of validation and critical thinking so that they don’t rely blindly on AI but use it to augment their insights.
SS: How do you think the federal government will manage AI adoption given its current structure?
ID: Implementing AI in government requires patience and a tailored approach. Unlike private industry, where you can take risks with innovation, the stakes in government are high, especially regarding public trust and compliance. While we face challenges, there’s a strong push toward modernization.
SE: Government environments are complex and often rely on legacy systems. There’s a high level of accountability, which can slow things down, but that’s necessary given the critical nature of public-sector services. We’re moving forward with AI, but adoption will be gradual and carefully controlled.