AI Unveiled: Are We at the Peak or the Plateau?

Erin and Teresa dive into AI and its profound impact on higher education, underscoring the need for thoughtful adoption and careful review of policies and use.

34 minutes
By: Trusted Voices

In this episode, Erin Hennessy and Teresa Valerio Parrot dive deep into the fascinating world of artificial intelligence and its profound impact on higher education. Erin candidly shares her feelings of unpreparedness in the face of AI’s challenges, setting the stage for a thoughtful discussion on the diverse range of reactions to this emerging technology, from anticipation to trepidation. After emphasizing AI’s growing presence in our daily lives and its profound implications for academia, the hosts introduce the Gartner Hype Cycle as a valuable framework for comprehending the journey of new technologies and where AI is currently perched.

Along the way, Erin and Teresa note the necessity of oversight to keep pace with the ever-evolving AI landscape, echoing the recent release of an executive order by the Biden administration. They underscore the importance of approaching AI with responsibility and integrity, highlighting the need for ethical standards. As always, Teresa and Erin offer valuable advice on navigating the AI landscape by engaging with trusted voices on campus, including students, and by formulating and periodically updating institution-specific AI policies to align with the institution's unique needs and mission.

Show Notes

AI in Industry

AI Terms

AI Governance


Read the full transcript here


Hello and welcome to the Trusted Voices podcast. I'm Teresa Valerio Parrot alongside Erin Hennessy. And in each episode, we discuss the latest news and biggest issues facing higher ed leaders through a communications lens. In these conversations, we're normally joined by a guest who shares their own experiences and perspectives, but this season, we're also making time for one-on-one conversations about what we are seeing, hearing and thinking. And today we have a theme.

So Erin, do you want to introduce us to our theme and kick off our conversation?


Sure. Today we are talking about, like everyone seems to be talking about, artificial intelligence and specifically how it intersects with higher education. And I will admit right up front that I feel pretty unprepared for, not our conversation today, I prepared for that, but pretty unprepared to advise our clients on how we navigate the challenges of AI. And I know it’s popping up across campus. It’s popping up obviously in the classroom. And in my role as an adjunct instructor, I have given thought to how you build an assignment that can’t be completed by AI, but instead helps our students figure out how to harness and use this new technology.

But it’s also showing up in enrollment and in student life and just basically every corner of the campus. And so this conversation is one that I’m really excited about even though I feel like I’m stumbling into this very, very new technology. So bear with us. And also, we’re hoping by the end of this conversation, we’ll have some great resources available in the show notes for folks to dig in and learn more, whether you’re at my level of understanding and comfort with AI or whether you’re at Teresa’s level of understanding and comfort.


And that's really just a difference between the two of us, and Erin and I talk about this all the time: if we're talking about ed tech, or some of the technologies used on campus in addition to what we traditionally call ed tech, I geek out on these kinds of things. So AI is one of those topics that I've been following, not always because I understand it, but because I want to understand it, and really because I'm seeing the impact and the opportunity for our industry. And I think that one of the things I'm recognizing these days is that a lot of this has been around for a while, and it's seeing a new day and new credibility based on some of the name changes that we're using and the hype around it.

And I know you'll wanna talk to us about the hype, and, to be honest, the fear associated with it, and then, moving into some of the worlds in which we live, the marketing behind it as well. So I think there's this interesting place for us to be thinking about where we fit in it, and how different pressures and also opportunities are coming together.

And I think it’s a great time for us to be talking about it.


Yeah, I want to note that in preparation for this conversation, I went on LinkedIn because I’m trying to use LinkedIn more. But I went on LinkedIn and posted and asked folks to share with me what they’ve been reading in order to get smarter, faster about AI. One person responded. Thank goodness it was a really smart person who had some really great perspectives. And so if you are connected with me on LinkedIn, I encourage you to go look at that post and see some of the resources and writers that were recommended in that response.

But I also think people just don’t yet know where to go and who to trust on whether this is something that is going to change my life and make it so much better, or it’s going to take my job and I’m going to be working for a computer one day.

And then, I don’t know, a week or two ago, somebody came out, some tech thinker came out and said, AI is going to kill us all. Like literally, we will be dead within five years. So I think, you know, between a lack of responsiveness on LinkedIn and sort of the range of promotional conversations going on out there, I think folks just don’t know how to even approach this topic and get smart fast about it.


Yeah, and I'm gonna go ahead and just date myself a bit here. I think I stopped believing in this "we're all gonna die in X years" talk many, many years ago. And I think we all lived, a number of us lived through that Hale-Bopp Comet moment when we were all gonna die, right?


Some of us didn’t live through that moment though. Yikes.


Some of us didn't, and I think, sorry everybody, but I think there are these opportunities where we see this fatalistic bent. And I think instead, let's think about where we see credibility around these topics. So I just watched a panel recently that was a collaboration of higher ed, government and business voices. And this will be in the show notes, two versions, because I know everybody's busy. One is if you wanna watch the panel. The second is if you want to see a media report of the panel, i.e., it's basically crib notes, I think. So I think what we need to think about is what the people who are looking at these opportunities are seeing, and the ways for community to come together, like Paul was talking about.

Higher education was represented by Michael Crow. There was also Representative Ted Lieu from California, and Slack's senior vice president, Christina Janzer, also participated. What I thought was interesting is that Michael Crow was talking about the rigidity of higher education and how slow we are to change, which I don't disagree with. He started to talk about some of the ways in which ASU is integrating AI into the classroom to help students get through gateway courses. And this is why I said some of this stuff has been around for a while: he started talking about personalized education, and that has been around for a while, including with the use of technology.

I worked with Lumina Foundation on a number of grants to help share the news of what their grantees were finding, and integrating technology into the ways in which we retain students was really successful if you can add in the ability for the coursework to move at the pace that the student is learning and engaging with the content. That's been around. So this is where I was saying, this is where marketing comes into this, because we're talking about this as something new and something that's scary. It's been here.

And I also wanna give recognition to our colleagues who've been working on competency-based education for a long time. What do people already know, so that we can put them into an environment in which they can move more quickly based on what they still need to know versus what they've already learned? And some of what Michael Crow was talking about was that as well. And the pushback I got when I would talk about competency-based education, probably about 10 years ago, was that it needed to have a big-name champion and a big-name institution to move it forward. So my hope for this moment is that ASU can be the institution, and Michael Crow can be the voice, that can move this from "yeah, but does it work?" to instead "how can we think about this more broadly in higher ed?"


Yeah, and I think there are probably some other big players in that space, and they have the scale where they can make this truly part of what they are doing and deploy this kind of tool in a way where it makes sense for them and for their students and for their institution. And I think those big players, you're absolutely right, that's where the acceptance is first going to come from. Then we'll sort of see how that cycle runs, because we've seen the years when there was great distrust of these huge mega-universities and the things that ASU and other institutions were trying to do. And now they've become more mainstream.

And you and I were talking about this yesterday: higher ed isn't always given the grace to experiment and fail. We are always expected to experiment and succeed. And on some level, I get it. It's an investment of time. It's an investment of money. I don't want to be your guinea pig. I want to get my degree or my credential, get out, get into the job market. But I think it'll be interesting to watch how this unfolds.

One of the things that I've been thinking about a lot lately and shared with you is my semi-new obsession, the hype cycle, which is a concept I was introduced to by a faculty member who attempted, God love him, but didn't quite succeed, to explain crypto and the blockchain to me. The hype cycle is this sort of way of thinking about new technologies. It's the Gartner Hype Cycle, and we'll put a link to definitions and also their 2023 assessment of tech and where different things are, including AI, on the hype cycle.

This is an audio medium, so you all can't see me making the arc of the hype cycle, but it talks about how there's this innovation trigger, this new technology that makes something break through into public consciousness. There is this peak of inflated expectations, which is where Gartner says we are right now with various forms of AI. There's the trough of disillusionment, which is also a potential name for my autobiography one day, where we start to see a couple of big, prominent experiments fail, and folks feel like maybe this isn't the panacea or the amazing technology that's either going to save us or kill us in the end. And then you sort of ease into the slope of enlightenment, where you start to see how the technology can really benefit different organizations, different kinds of institutions. And you start to see sort of the second and third generation, where some of those issues have been addressed and the tech is becoming more responsive to what people need. And then you hit what they call the plateau of productivity. I think we could look at online education and see this same hype cycle.

You could certainly look at MOOCs, and it's interesting. You can Google "the hype cycle and MOOCs" and see how 2012, the year of the MOOC, was the peak, and how it has dropped since. So I think based on that hype cycle, we are sort of at this peak. And I'm interested to see how long it takes for the line to drop and to come back up to that plateau of productivity, where this is more accepted and looked at with more perspective, both in the negative and the positive.


I also think there's this reality, again, as I've said before and I'm going to say again: a lot of this has been around. So yesterday I did a quick experiment: as I was engaging with AI throughout the day, I thought, well, I'm just going to make a little note for myself. Our days are filled with AI whether we're thinking about it or not, right? I use Grammarly. If you use Grammarly, guess what? That's AI-based. Siri and Alexa, Waze, the search engines that I'm using and how they're populating my content for me. Looking at the algorithms that are being used on social media, thinking about chatbots and how those are being integrated in higher education and in our own lives, including through travel, which is where I seem to experience them most.

And then I use Otter AI, and I've used it for years. I think I was one of those people who was encouraging everybody to start using Otter AI, because I think it's really, really strong. It's a program you can use to record and to transcribe information. And what I love about it is, I'm sure everybody's noticed, I have very particular ways in which I pronounce words. And a lot of transcription services, haha, I hear you laughing, a lot of transcription services would make it look as if I was speaking a foreign language with the recommendations that they would give. And there's an influence on that foreign language bit. And Otter has figured out my speech patterns. It's figured out if it's me speaking or Erin speaking or whoever it might be, and it's picking up on their speech patterns as well to give us really strong transcriptions. And we're using this stuff. So let's think about how we're using it. And then let's also think about if there are safety nets in ways that can take away some of this fear and anxiety that we're hearing too.

So right now we're anticipating the Biden administration's artificial intelligence executive order, which we think will be released next week. And what that's really supposed to do is help regulate all of this evolving technology and, it's said, help with the fear and hype around it. So I think we are starting to see the point where not only is this necessitating regulation from the government, but it is also shifting to a place where we need to think about what that regulation is for ourselves as well. So I have a whole thing about that, but if you have anything to say on where I just went, I'm happy to pause. And then I'd love to go into my rant.


Your whole thing?


My whole thing. 


No, I think you're absolutely right. And it is everywhere, and it is sort of a marketing question of how we're now packaging this. It was interesting to see that this executive order is gonna drop on, I think, Monday; that's when they're planning to release it. I think it's interesting because we are in this ongoing conversation about social media, the pluses and the minuses, and the internet and disinformation, and this conversation about how we should have regulated some of this stuff sooner. And so it'll be interesting to see what's in that EO and whether or not this ends up being an early enough intervention to perhaps, not save us, but protect us a bit more from some of the downsides of AI. And I have to think that moving this quickly is at least in some way informed by misinformation on the internet and social media and those challenges that we are grappling with, whether it's mental health, whether it's young women being influenced by negative perceptions, all of the issues and problems we know about social media. We'll see whether this is substantive enough and early enough to protect us a bit from some of the downfalls.

Ad Read:

Salesforce is a proud underwriter of Volt’s coverage of AI in higher ed.
How can your institution unlock meaningful value from AI throughout the student journey? Salesforce AI delivers trusted, extensible AI grounded in the fabric of the #1 CRM platform. Create customizable, predictive, and generative AI experiences that increase faculty & staff productivity and build learner relationships for life. To learn more visit,


Please go ahead with your rant. I’m so excited. I’m just going to put myself on mute and sip my seltzer and let you yell for a minute.


Here we go. You can’t see it, but I’m dancing. So I have this amazing colleague. His name is Cayce Myers and he is at Virginia Tech. The show notes include a podcast that he did for the Virginia Tech Curious Conversations podcast. And his area of expertise as a faculty member is to talk about law, public relations, journalism and communications and how all of those are coming together. And right now he is in hot demand to be thinking about the legal terrain associated with AI. And I think that’s a really important topic.

And the other thing that I will say, it’s in the show notes, and I would love to extend an invitation to people, I’m on the planning committee for the American Marketing Association’s Symposium for the Advancement of Higher Education, which will be in mid-November in Chicago. And I asked the committee to please have him present a pre-conference session. So if you’re going to the conference, on Sunday, he is going to be talking about these discussions.

And I think it’s really important for us to be thinking about what is the application of generative AI for all of us in MarCom? What kind of intellectual property issues are raised? What are privacy issues? How does this tie into discrimination? What is liability associated with this? For example, disinformation, defamation, et cetera, et cetera, et cetera.

So he’s going to talk about what all of this looks like and then project to the future so we can think about what else we should have on our radar screens. So this is the legal part of this.

But the personal part that I think is important is this: I've been thinking about, as we have these tools and as we use them to make our work easier, I want for us to remember the responsibility that we each have, if we're using ChatGPT or other applications, to give credit and recognition to those whose ideas we may blender to come up with our own. I think we have this really important moment for all of us to understand and to think through how we've gotten here and whose ideas have helped us shape our own, and to make sure that we're thinking about that. As we're using these different technologies in ways that help us to build relationships with others, remember that the technology is supposed to be a tool, not the way in which we connect with others. And so we need to maintain that human element in all of this. And I think in our let's-go-faster-and-do-more mode, we are forgetting the people and the community that we are with, and we are not respecting each other sometimes in the ways that we should. So think about those applications that Cayce may think about and talk about and have scholarship on, and then think about how you are maintaining and living your own personal and professional ethics and keeping relationships strong. That's it.


That’s it.


For now, keep going.


All right. I mean, I'm very impressed. That was concise, and I feel like I understood most of it, which is great. To your point of sort of acknowledging when we use these tools and how we use these tools: going back to that LinkedIn post that I mentioned, a friend of ours who chimed in and responded to me put something at the end about how he had used, quote, his "AI stack," which is something he's been working on since 2019, to create the answer that he posted on my LinkedIn. And I'm a bright person, but I also know when I need a little more. And so I asked him to sort of explain what that means. And he put in this response, and it's really fascinating and goes to your point.

He walked through it: these are all my own ideas; I used AI in some of these ways that you mentioned, Teresa, that we don't even think of as AI, including Siri and Otter AI in particular, to pull my thoughts together while I was on a road trip; I dropped everything into ChatGPT; I gave it specific instructions to clean up my stuff, but not add anybody else's. And it's really interesting.

And I think that’s a great example of what you’re talking about in terms of being able to say, and here’s how I got there. And here’s when it’s my ideas that I’ve used AI to clean up and make me more productive and make this process easier. And here’s when I’ve used AI to bolster or support my own arguments. And here’s where I have just gone, I know nothing about this. Let me have this thing produce something that I can respond to, think about, and use to build off of. And I think what Justin did in that comment really highlights exactly the kind of ethical approach to AI that you’re advocating for.


Yes. And I think there are ways that we can use ChatGPT and some other programs and options to say what we want to say, using that blendering of other people's ideas, in a summary, in an op-ed, in whatever it is that you use as your prompt for that program. And the way I like to think about this is to include a note that says, "I was considering," and then your prompt, "and I received," and then the result, and "I was influenced by" the entries included.

Something that gives the credit and the recognition, because again, it's part of your processing. How are you thinking about that? Because once you get to that blendering and smoothie of other people's ideas, it may not be as easy for you to include the hyperlinks that give them the credit back for the scholarship and the thought leadership that they've put into this.


Yep. And we all know in academia in particular, in other industries and fields and leadership positions as well, that kind of integrity and that kind of citation and that kind of acknowledgement of your sources is hugely important and can be a career ender if you blow that. So…


And actually, if I can play off of that for a second, here's one of the ways I saw that this could be applied, and how it can make not just you, but also your president, your leaders, your institution vulnerable. I was just talking to someone in the last couple of days, and they were saying that they used ChatGPT to kind of blender a whole bunch of ideas to write a speech for their president. And I got sick to my stomach about that because I said, OK, go ahead.


I was going to say a little bit of sweat right there, a little bit of sweat. Yeah.


Right? And I can see that this is happening and I can see that this could be a fantastic approach to use. But I need for everybody to stop when they get their results and take a really close look at it and say, “is this ethically representing my president and the ideas of others? And does this put us in a strong light?”

And you and I talk quite a bit as we’re looking at proactive and reactive to be thinking about that balance of the court of law and the court of public opinion. So going back to what Cayce says, are you gonna be called on because you’ve stolen somebody else’s ideas? And as importantly, are you going to be called out because you’re not recognizing that you have been influenced by others?


Yeah. So after we talked yesterday in preparation for this recording today, and since we did reference when that Biden EO is gonna drop, we should probably reference that today is the 26th of October. So that drop is expected on Monday the 30th. So by the time you hear this, it will be out already, and I’m sure you’ll have seen some great coverage of it.

But after our conversation yesterday, what I thought about was sort of how, what kind of rules of the road or what kind of guidelines or things should we be keeping in mind as we think about engaging with AI? Because I think all of us agree that building a wall around it and trying to keep it out of our lives is … we’re too late already. And it simply isn’t the smart approach. Our students need to know about this. We need to know about this as we continue to chart our own careers in higher education and communications and leadership.

So here I'm going to give you my one, two, three, four current thoughts, and welcome you to tell me what you agree with and what you would reshape. My first thought: having done some Googling yesterday, it was fascinating to see all of these pieces of journalism and op-eds and coverage come back with question marks in the headlines. And I think that just sort of speaks to the moment that we're in. This right now is generating so much attention, but so many of those headlines have question marks, which tells me we're still sort of early in that hype cycle and we're still trying to figure this out. And so I would just sort of keep that in the back of your mind.

And I think the other thing is to be skeptical of both ends of the spectrum: to be skeptical of somebody who's going to tell you this thing will kill us, and to be skeptical of anyone who tells you that AI is going to solve your most intractable, biggest, hairiest problems. I think villainizing this is foolish, and I think it's a waste of energy. And I think that applies to all technology. I don't know that there are technologies that are inherently good or bad. Social media, AI, the internet: it's all about how we teach people to use them, what's appropriate, and honestly how we regulate those tools that really dictates whether they are "good" or "bad."

And then lastly, I want us to remember that many of us who are listening to this work on college campuses and there are internal trusted voices on our campuses, not just external voices, but internal ones who can help us figure this out, who can tell us how it really works, what it can really do, what the real pitfalls are that we need to watch out for. So think about reaching out and having a conversation with your CIO, with your comp sci faculty. We found some interesting voices around AI and business faculties at various institutions. So don’t just look externally, make sure you’re tapping into those resources internally. That’s sort of where my head is at and I welcome any additions or revisions that you might have for those rules.


Yeah, I think I love the idea, and I always love the idea because I view myself as so curious about everything. Tiringly so, right, Erin? 


Mmm, no comment.


And for that reason, it's how do we pull these topics apart in the way that you just talked about? I would add journalism. I would add law. I would add whoever runs your teaching, learning, and excellence program. I saw something interesting on Bluesky the other day. So connect with us on LinkedIn, connect with us on Threads, on Bluesky. There were faculty members who were saying that they are encouraging their students not to use ChatGPT. And what they realized is the overall sophistication of student writing has decreased. And they have to be okay with that, because they are asking the students to be vulnerable. And it means that you then have to help them build back. They were using a tool to overcome something, or to help them, or to bridge something. And so if you want them to do this on their own, and if you are putting them in this different space from how they have created solutions for themselves, you have to have the ability to work with them to get to where you want them to be. I thought that was fascinating.

So that’s all to say, I think teaching and learning areas on campus are really working through how to use this and how to make sure that professors are ready for this moment because this is interesting for all of us.


Yeah, and I, sorry, I was just gonna tack onto that. You have a great list of additions there, and I would say also engage with your students to figure out how to use these things because with so many of these technologies, they’re getting there before we are, they’re more comfortable with it than we are, and there are real opportunities to work together with our students to figure out how to chart this path forward rather than us sort of pushing policies top down.


Right. We had a faculty member who raised something that was interesting, and that was what she's most working with students on right now. She's embracing AI, and she is working with students on their ability to ask sophisticated and in-depth questions. And so she was saying they're actually learning the content more deeply, because in order to develop the prompts that will result in what it is that they want, the sophistication of how they even compose that question is really, really important. So we can add in the show notes kind of how she's thinking about that as well. But if I were to summarize how we've talked about this, this is what would be my advice.

I think we all know that AI is integrated into our lives. And I think my takeaway and my suggestion is, let’s be smart about what it is, how it can be used, and as importantly, how it should be used.


Yeah, I think that's really important. And it wouldn't be a conversation between the two of us if we didn't talk about the need for policies and for guidelines at our institutions. There's a UNESCO survey that says less than 10% of schools and universities have policies related to AI, which makes sense considering how early in that hype cycle we are. And I would encourage our listeners, our colleagues, to start thinking about how you build policies around AI, not just in the classroom but elsewhere, to incorporate those things that you mentioned, like citations.

But also, we always tell folks that they need to look at their policies regularly and revise and refresh them to meet the current moment. I would tell you that your AI policies and guidance should probably be, at this point, on an every-six-months tempo in terms of re-looking at them and refreshing them to meet the current moment. I don't think we can create policies and just walk away from them, particularly in this area, because it's just changing too fast.


With that change, I would say, having lived a previous policy life, make sure that you’re looking holistically at your policies as well, because I’m seeing some pop-up AI policies that don’t go back and look at the totality of the policies for the institution. And there are some missed opportunities and also some conflicts in what’s now being rolled out.


And think about how you can use AI to look at your policies and help you with that process, because we know it’s onerous. And I think the last thing I would throw out there is to really be smart and thoughtful about how you’re bringing your board along with developing these policies, but also just the educational piece. Some of your board members may be in roles and positions where this is something they’re thinking about for their businesses as well. And some may be less engaged and perhaps more trepidatious about what this all means. So thinking about an education process for your board, particularly in the higher ed context, I think is really important as well.


So as we think about these policies, I'm gonna go back to where we started this conversation. And that's to say that some of this has been around for a long time. If you're looking for a place to start, think about those institutions that were early adopters of personalized learning. They're thinking about privacy in ways that perhaps we all need to be thinking about privacy. Think about competency-based education, because those institutions are thinking about how this still fits with what your accreditor wants. And think about some of those institutions that are already popping up these AI policies.

We normally think this is something that we need to go to an MIT for, or to an Ivy League institution, or a Stanford, or an institution that we think of as tied to technology. What I would challenge us to do is to ask who has been doing the work, and go to those institutions. Brandman is considered the top institution for competency-based education. Look to see what they've done. The CSU system, the California State University system, has done a significant amount of work with personalized education.

It doesn’t just have to be those that we assume are the go-tos. It may be those that have been quietly doing the work and also those that are popping up those policies these days.


Right. And I think also it’s worth saying that, you know, you mentioned MIT, that MIT’s approach and use of AI is going to be vastly different than, pick a small, private, mid-sized institution in the middle of the country, pick any other institution.


A regional comprehensive, a community college. Exactly, exactly.


Yes. And so AI is going to look and feel different. AI is going to be used differently. AI may not even be something that your institution is ready to think about and talk about yet because of the students you serve, the control that you have, the mission that is part of your institution.

So yeah, we counsel clients all the time, don’t feel like you have to be on the latest social media platform just because somebody else is on the latest social media platform. If it doesn’t work for you, it doesn’t work for you. If it doesn’t reach your audience, it’s not gonna help you.

And I think it’s the same kind of thing with AI, that MIT’s approach likely isn’t going to fit anybody but MIT. And CSU’s approach isn’t going to fit anybody but the CSUs. And so think about before you just launch yourself into AI, what needs do I have that this particular tool can help me meet? And then think about how you incorporate that, rather than saying, we need to buy everything and do everything and create policies that really don’t have any relevance to our actual institution and the students we serve.


And while you’re at it, do a policy search to see if you still have language that was put in during MOOC-a-palooza.


Oh my gosh. You know, there’s some part of me because of the role I was in at that point and the people that I worked for where I feel some responsibility for MOOC-a-palooza. But I definitely will take some credit for using the term MOOC-a-palooza. But thank you for the opportunity to go back to the early 2010s and relive that.

With that, before we get into any more trouble than we already have or before you start dancing again, I think we will call this a wrap. We certainly encourage you to look at the show notes. We’ve dropped a bunch of links in there, many that we mentioned, some resources that we didn’t get time to touch on. And we encourage you to share back with us the tools and the things you’re reading and what’s helping you make sense of this particular moment in technology and higher ed.

We want to say thank you so much for joining us this episode, this new approach to an episode for this season. A reminder that you can find links in the show notes to some of the topics and articles referenced in our conversation today, and remember that you can always contact us with feedback, questions or guest suggestions at

Be sure to follow Trusted Voices wherever you get your podcasts, and we also invite you to check out Higher Voltage, another podcast on the Volt Network hosted by our friend Kevin Tyler. Until next time, thanks to Teresa Valerio Parrot, DJ Hauschild, Aaron Stern, Nicole Reed, and the entire Volt team for a great episode, and thank you for listening.

Trusted Voices


Trusted Voices explores the complex intersection of leadership and communication in higher education. Each episode, hosts Teresa Valerio Parrot and Erin Hennessy chat with university presidents, industry thought leaders — and each other — about the latest news in the industry and the challenges and opportunities facing those in the most visible roles in higher ed.

