Angel Acosta: Contemplation, Healing, and AI

An interview with Angel Acosta, founder of the Acosta Institute.

We’re excited to welcome to the podcast Dr. Angel Acosta, an expert on healing-centered education and leadership. Angel runs the Acosta Institute which helps communities process trauma and build environments for people to thrive.

Listen on Apple, Spotify, and YouTube.

He also facilitates leadership programs at the Garrison Institute that support the next generation of contemplative leaders. With his background in social sciences, curriculum design, and adult education, Angel has been thinking deeply about how artificial intelligence intersects with mindfulness, social justice and education.

In our conversation, we explore how AI can help or hinder our capacity for contemplation and healing. For example, does offloading cognitive tasks to AI tools like ChatGPT create more mental space for mindfulness? How do we ensure these technologies don’t increase anxiety and threaten our sense of self?

We also discuss the promise and perils of AI for transforming education. What roles might AI assistants play in helping educators be more present with students? How can we design assignments that account for AI without compromising learning? What would a decolonized curriculum enabled by AI look like?

And we envision more grounded, humanistic uses of rapidly evolving AI—from thinking of it as "ecological technology" interdependent with the natural world, to leveraging its pattern recognition in service of collective healing and wisdom. What guiding principles do we need for AI that enhances both efficiency and humanity? How can we consciously harness it to create the conditions for people and communities to thrive holistically?

We’d like to thank our friends at the House of Beautiful Business for sparking our relationship with Angel—we highly recommend you check out their events and join their community.

Let’s dive into our conversation with Angel Acosta.

Transcript (from Apple Podcasts):

Welcome to Artificiality, Where Minds Meet Machines.

We founded Artificiality to help people make sense of artificial intelligence.

Every week, we publish essays, podcasts, and research to help you be smarter about AI.

Please check out all of Artificiality at

We're excited to welcome to the podcast Dr. Angel Acosta, an expert on healing-centered education and leadership.

Angel runs the Acosta Institute, which helps communities process trauma and build environments for people to thrive.

He also facilitates leadership programs at the Garrison Institute that support the next generation of contemplative leaders.

With his background in social sciences, curriculum design, and adult education, Angel has been thinking deeply about how artificial intelligence intersects with mindfulness, social justice, and education.

In our conversation, we explore how AI can help or hinder our capacity for contemplation and healing.

For example, does offloading cognitive tasks to AI tools like ChatGPT create more mental space for mindfulness?

How do we ensure these technologies don't increase anxiety and threaten our sense of self?

We also discuss the promise and perils of AI for transforming education.

What roles might AI assistants play in helping educators be more present with students?

How can we design assignments that account for AI without compromising learning?

What would a decolonized curriculum enabled by AI look like?

And we envision more grounded humanistic uses of rapidly evolving AI from thinking of it as ecological technology interdependent with the natural world, to leveraging its pattern recognition in service of collective healing and wisdom.

What guiding principles do we need for AI that enhances both efficiency and humanity?

How can we consciously harness it to create the conditions for people and communities to thrive holistically?

We'd like to thank our friends at the House of Beautiful Business for sparking our relationship with Angel.

We highly recommend you check out their events and join their community.

Now, let's dive into our conversation with Angel Acosta.

Angel, thanks so much for taking the time to talk to us.

We're really excited to be able to have a conversation on a podcast with you.

Perhaps you could start off by telling us about the Acosta Institute and what got you interested in AI, related to the other work that you do.

Yes, such a pleasure to be here, both of you.

Really wanna give thanks to the House of Beautiful Business for connecting us.

You know, my work straddles two institutions, the Acosta Institute and the Garrison Institute.

At the Acosta Institute, you know, we're really trying to build and foster healing-centered learning environments and healing-centered cultures.

So we've spent about five years building that organization, and it really encompasses 15 years of research of helping communities process institutional and collective trauma, and doing it in a way that allows us to pay attention to our assets and to the conditions that are necessary for communities to thrive.

So we've hosted all kinds of conferences and workshops and events to help educators in particular, but also consultants and leaders develop a healing-centered approach to how they facilitate community building and facilitate restorative practices.

So I'm really grateful for that work.

And at the Garrison Institute, I run the fellowship program, supporting the next generation of contemplative leaders in the field and expanding their reach.

And at the core of my work, I'm a social scientist by training.

So my doctorate was in curriculum and teaching.

And I finished my doctorate at Columbia.

I spent five years researching how to design curriculum for not just public schools, but also for organizations who are thinking about the development of their employees.

So I spent a lot of time thinking about adult education.

So I spent enough hours thinking about what are the key elements needed to design a vibrant and dynamic learning experience.

But I went deeper in terms of connecting mindfulness to social justice and to leadership.

And all of that resulted in me focusing on healing.

And in terms of AI, if you think about mindfulness, social justice and leadership, artificial intelligence bursts through all of those fields.

And so for me over the last two years, as I've seen the AI technologies evolve rapidly, I began to pay attention to how they were impacting adult learning and adult development.

And I felt that I could have something to say and do to help communities adapt to what seems to be one of the greatest evolutionary leaps in our species' history.

Some scholars, as you know, are comparing this moment to the Industrial Revolution, maybe bigger.

So I just found it irresponsible of me to know what I know about education and to have the skill set that I have and not put myself in the middle of helping community members become more confident in their development of AI skills.

And also, more importantly, I think, processing the anxieties around that kind of technology, which requires such delicate handling.

So we started to host workshops.

We titled them Artificial Intelligence for the Rest of Us.

And it was a play on what was happening at the time, with the deepest respect for Sam Altman and the others.

But this concentration of power in regards to these exponential technologies at the hands of a few.

So with the name Artificial Intelligence for the Rest of Us, we tried to play with the idea that we also need to concentrate power in community, in terms of allowing us to understand how to use these technologies.

And that series has been really successful and really fun.

We've had an 85-year-old come in to learn how to use ChatGPT.

And sometimes with them, you know, just keeping them on the call is a success.

And then we developed a provocative conversation around this question.

Can AI heal us?

That was an event we hosted with the House of Beautiful Business.

There's a wonderful two-hour conversation on YouTube right now that you can find.

And then we also produced a Can AI Heal Us?

magazine, which will be released in the coming months.

So we've been playing around a lot with AI, and we've also developed some custom GPTs that we have launched for the community, which I can speak to that a little bit more later.

And lastly, this idea of leveraging AI to decolonize education as a provocative endeavor.

I think that's kind of fascinating.

But I want to...

I mean, the first question that comes to my mind here is like, what about the...

If you don't deal with the anxieties up front, because you mentioned that a couple of times, if you don't deal with the anxieties up front, what happens?

How important is it to do that as a specific step?

Yeah, you know, from a very tactical perspective, the longer you wait to deal with the anxiety around AI, the longer you'll wait to jump in and play.

So, for example, when ChatGPT was initially released to the general public, it took me six months just to kind of dance around and make sense of what was happening.

Like, what is this?

And it wasn't until like six months in that I was kind of like, I don't even know if I belong in this field. I don't understand the technologies. What are these language models? What is this thing called scraping? What are the ethical implications?

It took me six months to get over all the data that I had to process to feel comfortable, even to create a ChatGPT login.

So from a tactical perspective, giving people spaces and opportunities to process their anxieties moves them along that decision-making process a little faster, and people don't have to wait so long until they feel comfortable to jump in.

So from a tactical place, we need to do it so that people can jump in as soon as possible and start their adaptation process.

One of the analogies that I use with people is, I mean, if you think about just the evolution of the internet, and think about when you got your first email and started to use email, it took some time for you to develop a certain fluency with sending and receiving emails.

But then after a while, you were engaging in this new form of communication, fluidly.

So really helping people kind of process their anxieties so they can start to use the tools.

But I think more importantly, for people who wait or delay entering into this new digital exponential phase, it's not that they will be left behind, because that framing triggers more anxiety.

It's that the sooner you get in, the more space you give yourself to actually feel comfortable with the variety of tools that will continue to come.

So that's part of what I tell folks.

And then also, and this is a question I have for you two in this conversation, there are the ethical parts of this.

Part of the fear around it is, are we going to lose control?

Is there too much copyright infringement?

What about the consumption of energy that is going to increase climate change?

Those are aspects of AI that are oftentimes hidden from us.

One of the things that caught me when we first talked was the seeming contradiction, or perhaps when I first heard it, I thought it was a bit of a contradiction: you're approaching the world to advocate for and educate people on mindfulness, contemplative leadership, and healing-centered education, and then putting that together in today's world of AI, which is all about speed, getting things done, reducing people's work, you know, the fear of people getting left behind and loss of jobs.

Like, there's something about the...

it feels like the pace or the embrace of humanity.

And so when you talk about contemplative leadership together with what is now this, like, crazy noise of AI, it did make me feel a bit like we're kindred souls, in that we're not, you know, 100% optimists about this at all.

Every workshop we run, we start with a question of what's your excitement to fear ratio?

Because we want to understand and invite people to have both, you know, excitement and fear.

But we have talked about: is it possible for AI to help us be more mindful?

It's sort of a question.

And I guess that's what was sort of the attraction moment at first: the beautiful things that you seek to amplify in humanity seem to be at odds with the way that the Silicon Valley types are trying to make this technology come into our lives.

And I guess I'm not sure there's really a question there.

It's more of a statement and an invitation to reflect on how you have found, or look for, that source of beauty in using this technology when that might feel foreign to many.

You know, many people see it as this task enhancer that's maybe going to reduce their jobs, and you're inviting the rest of us to see that there might be something more beautiful in it.

Yeah, thank you for that invitation.

And if I may go back to the previous question briefly around why it's important to help people process their fear, one thing that I left out was that when we have too much accumulated fear around a particular topic, our sympathetic nervous system is overactive.

So when we're talking about AI or trying to engage in learning in it, our body literally feels threatened.

And could you imagine the quality of your learning if consciously and subconsciously you feel threatened?

So I just wanted to make that point that the more we help people process their fear around AI, the more they can step into learning experiences with AI that activate the opposite, the parasympathetic nervous system, and they can actually learn more.

So that's part of how I deploy contemplative science into this space of AI is knowing what I know about neuroscience, knowing what I know about the nervous system, to slow down in order to think about how we can adapt to this exponential moment in a way that meets our hardware of our brains and our bodies.

Because that's a big contradiction now, not just culturally and economically, where the goal is to make things more efficient, outsource more and more labor, but also we're talking about adapting to a technology that is almost in contradiction to how the human organism works.

We need to rest.

We have our circadian rhythms.

So contemplative science allows us to understand how we can slow down through mindfulness to actually keep up.

So there is a way to slow down and to regulate our nervous system when it comes to these exponential technologies, so that we can jump on the horse at the right moment and be able to move.

So for example, it's somewhat like holding paradox.

I can both at the same time slow down, slow down my nervous system, engage in practices that allow me to be fully embodied and somatically aware, and at the same time have an incredibly busy schedule that requires me to do all the things from fatherhood to business leader, et cetera.

It's not that I need to stop everything, but that I need to slow down enough to find alignment across the different responsibilities that I have.

And I think that's the same with AI, that when we leverage contemplative practices, mindfulness, and a healing-centered approach to AI, you can kind of really sense into like, what's the best way for me to use these tools?

What's the best way for my community to engage in these exponential technologies?

So for example, one of the ways that I've found AI to be incredibly helpful is offloading some of the cognitive load that's required to hold massive amounts of data.

So I'll give you one simple example.

You know, I trained a custom GPT to hold my 20-year vision for my organization.

Just think about that.

Just holding that 20-year vision, per quarter, across the next 20 years.

Just kind of dump that data in, even though things will change, and be able to, at any moment, engage that AI: ask questions to remind me of where I want to be 5, 10, 20 years from now, ask it particular questions around what I'm doing now and what I would like to do, and have these philosophical discussions around what I need to be doing or thinking about to do this or that.

It creates space for me that I would not otherwise be able to have.

So whether that's healing-centered or not, maybe not, but with the space that it opens up for me, I'm able to be a lot more present in the work with my team, with my community.

One of the thoughts that I'm having in talking with you is that we don't have the right language or mental models for, I think in many respects, healthy adoption of AI at a mass level across everything.

That the language is one of the digital economy more than it is of anything else.

What are you finding in your work on mindfulness, contemplative practice, and the other things you do, about where people can use different language around how they use these tools, language that gives them access to different mental models and different intuitions, other than productivity, speed, efficiency, getting smarter with AI, or being more competitive than someone who doesn't use AI? Have you found different ways of talking about using something like ChatGPT that change the nature of the discussion, that change the mental model for how people might use these tools?

Yeah, it's a great question.

There are a few places. If you haven't heard of James Bridle, his notion of ecological technology is, I think, by far one of the most powerful ways to think about AI.

James talks about thinking about AI not just as these large language models that are able to scrape all this data and statistically respond to certain questions, but also about how that technology is connected to the Earth and how we're extracting precious minerals to build these machines.

But more importantly, thinking about relating to AI as an extension of nature, so that yes, these tools have these powerful capacities, but they can also help us think about planetary consciousness and planetary intelligence.

I mean, this is kind of out there, but that's really...

When you frame it in that way, artificial intelligence is the most powerful opportunity for us to rethink how we relate to intelligence, period, artificial and animal and human intelligence.

That puts it in different terms.

So for me, that's how I think about it, especially with James' help of reframing it.

I'm actually not necessarily trying to be more efficient or productive; I'm trying to expand the dynamics of what it means to be intelligent in a multi-species way.

Yeah, I totally relate to that.

I mean, that's part of...

When people say, why do you have complexity and complexity science as part of AI?

And to me, it just sort of is kind of obvious, but then I'm reminded that it's actually not obvious.

That it's part of understanding that there's a totally...

There's so many different ways that AI can take us to other spaces of what it means to be intelligent.

Because it's not like us, it's totally alien.

It's a completely different kind of intelligence.

We don't see that.

Most people, we're prevented from seeing it, because the discourse is about artificial general intelligence and about beating humans on benchmarks.

But to me, that's just missing the point.

And it also raises so much anxiety that doesn't need to be there.

You have to spend so much time explaining what a different morphological state would be for an intelligence.

What does collective intelligence mean with machines?

What is some sort of collective consciousness?

How do we even conceive of what that might look like without even getting to a conversation about whether a machine is conscious or not?

You don't even have to go to that level.

But I agree with you.

His book was a wonderful journey through different ways of thinking about and critiquing AI.

And we'll put it in the show notes, because he did a great interview with Emergence magazine.

It's shorter than the book.

The interview with Emergence magazine, Ecological Technology, is an incredible conversation.

I'd love to shift and ask your opinion, your view on AI as part of education.

In our work, mostly in higher education with colleges, universities, faculty, and students, there is everything from some level of excitement through to an existential crisis that these tools will prevent us from learning at all, and faculty are having a lot of trouble finding an optimistic path for how to re-craft what they do with these tools.

And I'm curious how you think about that, and what you would say to teachers and educators.

Yeah, you know, I think it's important to frame the history of education in the US in particular, but also just more broadly in our global system.

It's always been due for a redesign.

There have been classic debates since the beginning of mass public education around what we should teach.

It's been a very long-standing debate.

You want to acknowledge the development of compulsory mass education in the United States in particular in the late 19th century, when the US, like many other countries, was beginning to experience an influx of migrants from Europe.

You had to think about that.

You had all these people coming, and the country wrestling with, how do we teach these people?

How do we train them?

How do we train their children?

So public education, compulsory mass public education, over its roughly 150-to-170-year period, has required some kind of reform.

And you've seen it happen, especially in private schools, the ways that private schools give students a lot more space to discover their interests, a lot more opportunities to learn by doing.

So this is kind of a learn-by-doing moment, where the fundamental ways that we perceived getting an education don't necessarily work anymore.

Learning how to design a prompt now has more material and financial consequence than taking a 101 course in a core general requirement at a university.

And so it's a bit of a balance; we want to honor the fundamental building blocks of a good, solid liberal arts higher education.

I am a beneficiary of that.

I love it.

And at the same time, with the ways in which these AI tools will allow us to explore education deeper and then use that education for material gain in a faster way, the system has to adapt.

And then cheating and plagiarism is a very real thing.

Students and people in general will leverage these tools to falsely present and demonstrate what they know.

That is true.

So in that regard, which I think I sense is one of the biggest anxieties in higher ed, is kind of really thinking methodically around how do you design assignments that account for the use of exponential technologies of AI, right?

So one of the ways is in integrating it.

So you can use AI for this assignment, you can use AI for this task, but, like in math, demonstrate how you used it, demonstrate the prompt, right?

So there is a way for us to tinker around the use of AI in education.

We just got to do a little bit more thinking around it.

I firmly think that you could position AI in the classroom in a way that would support students to feel more comfortable in their exploration of subject matter.

I think that's possible.

I also think it's possible for you to integrate AI as a companion to a faculty member to offload certain tasks that are bureaucratic and to enable the faculty member to spend more time doing the task of teaching.

You'd be surprised how much time a faculty member spends on non-teaching and non-educating related tasks.

So by the time that a professor is in the classroom or doing anything related to student relationship, he or she or they are exhausted.

So there are ways that we can leverage AI to carry the cognitive load of those tasks and give professors and teachers more time to spend in the relationship with students.

Because if anything improves the educational outcomes of any community, of any students, it is the quality of the relationship between student and teacher.

So there's a lot there.

I'm still thinking about it myself, and it's overwhelming.

But it's such a juicy, juicy question.

I'm glad you say it's overwhelming.

Because I think that, you know, I do see a lot of people wanting sort of an instant answer here.

And it's much more than just that. I mean, if you read documents from Microsoft, it'll be, well, you know, it's about teaching critical thinking through AI.

And I'm like, well, hang on a minute.

You know, there's just got to be so much more than that.

I'm really interested in how you think about flatness or sameness, the use of a single model that gives everyone similar answers.

And how you think about introducing more diversity or nudging these tools into unusual places themselves so that there's a way to be more creative without everyone coming up with the same answer.

Yeah, that's the benefit of some of the custom GPTs within ChatGPT or within Microsoft Copilot; those are the main models that have been equipped with the capacity to do that.

But I think there is an incredible level of freedom that can come with leveraging custom large language models that you can train with your own proprietary data.

And specifically where the data is not subject to use for further training, which protects your data.

I'll give you an example.

I was working with a group of faculty members at a university, at a city college, and this group of faculty received a really big grant to explore different ways to teach from a decolonial and embodied perspective.

So given that a lot of our Western educational structures have really dark histories when it comes to marginalized communities, the whole field of anthropology was built on the backs of examining, quote unquote, natives.

So there's a direct connection between Western education and our colonial history.

So this school, this university got a grant to think about how do you decolonize education?

And what is that?

And what are some of the practices, ranging from teaching Indigenous languages, teaching Indigenous history, and thinking about different ways to assess students, not just on grades, but assessing students' multiple intelligences, visual, auditory, kinesthetic, not this obsession with the mind and IQ, all of which comes from a particular Western paradigm of prioritizing the mind and reinforcing this mind-body dualism?

My mind is separate somehow from my body.

So decolonizing education and embodied learning at the core is about increasing engagement and adding more voices.

So I ended up designing and training a custom GPT on publicly accessible data on decolonizing education.

How do you decolonize a syllabus?

How do you think about embodied learning from a decolonial lens?

And I found a bunch of research around how do you decolonize a geology syllabus, a history syllabus, an English syllabus.

And I trained the thing on it.

And then during the facilitation with the faculty members, I spent half the day giving them my instruction around what I know about this work.

And then I said, all right, everybody, I'm going to introduce something very different.

We're going to work with AI.

And they were like, what?

And I pulled up this custom GPT on the screen.

And I said, this is going to be my co-facilitator for the rest of the afternoon.

Who would like to be vulnerable and share a little bit about their syllabus, their subject?

And maybe we can talk to the AI to see what they would recommend as far as decolonizing their syllabus.

And we had like four examples.

And the GPT did a pretty good job.

A pretty good job at giving some really hands-on recommendations for how to make space in your syllabus for non-Western voices, how to make space in your syllabus for student voice.

Because part of decolonizing education is that the instructor is not the sole bearer and holder of wisdom; you want to create more space for student voice.

More space for students to be able to have a say in terms of how we explore the subject matter.

So it was a fascinating moment.

It was the first time I used AI to co-facilitate a session.

And they were excited.

They were excited.

But I did do a little bit of work in the beginning to process the anxieties around AI.

So then when we started using it, it was a little bit more generative.

So that's how I've been trying to experiment with creating different inputs and generating different outcomes: leveraging it to do the work of dismantling oppressive structures and approaches to education.

I could see...

Tell me if I'm off here, but I could see how using that co-facilitator created more space for you to be present in the room too.

I mean, you're not working...

Part of the cognitive load of facilitating, which we do quite a lot of, is the challenge of being present for the people in the room, but also having your mind very much on the task at hand that you're trying to take someone through.

You're working through what your syllabus is, trying to come up with practical ways you might change it, but you've offloaded some of that to the AI, which you can recall, and you can be there.

Now, we do some of this because we co-facilitate, so one of us is kind of driving the conversation, and the other one's trying to think about what the task at hand is, or those kinds of things, but having a co-facilitator could be quite helpful in that way.

Well, and also, you know, there's just so much show, not tell, there.

You're showing that it's possible to work with an AI co-facilitator and not expect perfection because that's not the goal.

It's how do I offload some tasks to this co-facilitator that's going to get me 40, 50% of the way there.

The classic, it's a good start.

Yeah, and I'll just say two additional things that I think you two would appreciate about the use of that particular GPT, which we called Solara.

One is, once I had trained her on all the data that was publicly accessible and that I knew about decolonizing education across a variety of fields, and played around with the interaction instructions so that she acts as a learning coach and curriculum design specialist, once we got to that fine-tuning of the GPT, I turned to her and asked what she now knew about the core tenets, principles, and criteria for decolonizing education.

And, you know, she gives me a lot, and I'm like, ooh, interesting, things that I already knew that I could affirm, that, okay, she's on the right track.

And then, through some dialogue and with my expertise, we designed a decolonizing-your-syllabus rubric.

So a rubric, you know, with tables, so that a person can look at their syllabus and at this rubric across these dimensions and score themselves.

In education, they love rubrics.

Anything you can use to assess and measure.

So here, with Solara, we built this decolonizing education rubric, which we used live during the live instruction with the faculty.

Now, here's a thing you need to know: when I was facilitating, I gave the rubric to the faculty.

They loved it.

They scored themselves.

They were like, oh my God, you know, here's where I'm weak.

And then when I told them, I said to them, I was like, look, if I was a little less humble, I wouldn't tell you this, but I actually co-designed the rubric with the AI we're about to use.

Mouths dropped.


And then two, the final thing is, if artificial intelligence, as some scholars argue, infringes upon copyrights, exploits nature, and has the potential to be an unethical force, then I find it poetic justice that I could play around and create a GPT that tries to decolonize education.

That is so true.

Thank you so much for your time today.

This feels to me like the prelude to a much longer conversation, but it feels like a good stopping point here.

I really appreciate the time to learn about your approaches.

I think the way you've used GPTs yourself has got my mind sort of whirring, thinking about how I might do some of the same things.

And I will say that the one thing I'm really holding is your phrase that creating a custom GPT has given you more space.

That I really like, because so many people are saying, oh, well, you get things done, you're saving time, and all of that.

But I really like the vision that it allows you to create space, which then allows you to use that space for other things.

I think that's quite beautiful and profound.

So that's my number one takeaway from this conversation.

Yeah, thank you so much.

Such a pleasure and a privilege.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email

Learn about our book Make Better Decisions and buy it on Amazon

Thanks to Jonathan Coulton for our music
