Rethinking AI Adoption: How Associations Can Navigate AI with Purpose

In this interview, Maryrose Lyons explores key challenges facing associations – gaps in understanding, ethical risks, the role of regulation, and how smaller teams can adopt AI.

22nd Jul, 2025
At the BestCities Global Forum in Dublin, our Deputy Editor met Maryrose Lyons, Founder of the AI Institute, an organisation that delivers world-class AI education to teams. Her keynote focused on using AI to streamline work and make sense of emerging technologies. He later joined her AI Core Skills course, which deepened his understanding of how organisations can adopt AI responsibly.
 
In this interview, Maryrose explores key challenges facing associations and professionals: gaps in understanding, ethical risks, the role of regulation, and how smaller teams can adopt AI without compromising their values.
 

Jesús Guerrero Chacón: You founded the Institute of AI Studies in 2023 and have since trained thousands in ethical AI adoption. What knowledge or action gaps do you commonly see in international associations and professional communities?

Maryrose Lyons: People think they are using AI well, until they join us. They have often been dabbling. Tinkering with prompts. Repurposing ChatGPT replies. Thinking that is the job done.

The truth? Prompting is just the surface. The real shift comes when you treat AI like a strategic partner, not a novelty. That is when you start getting serious results.
 
Here is what we consistently notice:
  1. Confidence without clarity: Leaders feel pressure to ‘use AI’ but lack a clear approach.
  2. Missed potential: Many teams use AI to save time, but the lessons learned are not shared, and teams are often unwilling to reimagine their workflows.
  3. Too much experimentation, not enough execution.


JGC: From ChatGPT to DeepSeek, AI tools evolve rapidly and carry hidden ethical and environmental costs. What is your advice for professionals who feel overwhelmed or hesitant to engage?

ML: It is a very real problem; in fact, ‘overwhelm’ is the number one fear participants report in their pre-course surveys in 2025. This has changed from ‘AI will take my job’ in 2023 and ‘data security’ in 2024.
 
The thing to remember is: you do not need to use every tool that is released every week. There is a lot of hype, and not every tool is ready to use on release.
 
That is where we excel: we are a trusted voice on what is worth looking at now and what is hype.


JGC: Your courses address digital trust and bias. How can organisations avoid building these biases into communications and services as AI tools become more ideologically skewed?

ML: Awareness of these biases is a major advantage we lacked in the early days of the web and social media.
 
Think back to the early 2000s: we clicked ‘accept’ on terms and conditions, uploaded our lives to social media, and handed over personal data like confetti at a wedding. We simply did not grasp how that information would be harvested, analysed, and monetised. The awareness came years later, often accompanied by uncomfortable revelations about manipulation and surveillance.
 
With AI, we are in a different position, entering this age with eyes wide open, understanding that algorithms are not neutral mathematical entities but reflect the perspectives, assumptions, and blind spots of their owners. This awareness is our superpower.
 
Diversify your AI sources. Do not rely on a single provider or model. Systems trained on varied datasets by different teams will exhibit different biases. Using multiple sources creates a kind of checks-and-balances system that can flag when one algorithm is pulling in a particular direction.
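 
To make this concrete, here is a minimal sketch of such a checks-and-balances routine in Python, using the openai and anthropic client libraries. The model names, the prompt, and the crude word-overlap heuristic are illustrative assumptions for this article, not the AI Institute's method; a production version would use a more robust comparison.

# Send the same prompt to two independently trained models and flag
# drafts where the answers diverge sharply. Assumes the `openai` and
# `anthropic` packages are installed and that OPENAI_API_KEY and
# ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

PROMPT = "Suggest five keynote speakers for an association event on digital health."

def ask_openai(prompt: str) -> str:
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    msg = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def overlap(a: str, b: str) -> float:
    # Crude lexical overlap: 0 means disjoint answers, 1 means identical.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

answer_a = ask_openai(PROMPT)
answer_b = ask_anthropic(PROMPT)
score = overlap(answer_a, answer_b)

# Low overlap does not prove bias, but it is a cheap signal that the two
# systems are pulling in different directions and a human should review.
if score < 0.3:
    print(f"Divergence flag (overlap {score:.2f}): route to human review.")
else:
    print(f"Answers broadly agree (overlap {score:.2f}).")

Even this crude routine illustrates the principle: when two independently built systems disagree, the disagreement itself is useful information.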
 
Build human oversight that specifically watches for bias, not just accuracy. Train your team to recognise when AI outputs feel slanted: perhaps member communications consistently emphasise certain cultural perspectives, or event recommendations systematically favour particular speakers or topics.
 
The beauty of our current awareness is that we can bake bias detection into our processes from day one, rather than discovering problems years down the line. I particularly like Anthropic (the company behind Claude) for their attention to this area.
 
Hopefully, we are not destined to repeat the data privacy mistakes of the early internet era, because we have learned those lessons and can apply that wisdom to AI governance.


JGC: Some critics say the EU AI Act may stifle innovation compared to the U.S. and China. Can regulation and responsible innovation coexist in Europe?

ML: As a European, I am delighted that the EU AI Act is in place. It makes me feel safe that my government cannot spy on me and that companies cannot manipulate users with impunity.
 
The ‘race’ narrative itself needs interrogating. Yes, the US champions ‘move fast and break things,’ but this strategy becomes profoundly problematic when we are not dealing with a social media platform that might lose users, but with technologies that could reshape human society itself. Breaking things in AI means damaging livelihoods, democratic processes, or fundamental rights. The stakes are simply too high for Silicon Valley's traditional approach.
 
China presents a more complex picture than many realise. Whilst Western media often portrays Chinese AI development as unrestrained, China implements significant controls, just different ones. They regulate AI to serve state objectives and social stability, with restrictions on certain applications and mandatory algorithmic audits for recommendation systems. Their approach is not ‘no regulation’; it is regulation aligned with different values and priorities.
 
Europe’s position looks less like ‘falling behind’ and more like choosing a different finishing line entirely. Whilst the US optimises for speed and China for state power, Europe optimises for human dignity and democratic values. This creates different types of innovation, perhaps slower to market, but potentially more sustainable and trustworthy.
 
Europe is not falling behind; it is defining what responsible leadership in the AI era looks like. In my opinion, that is a race worth winning.


JGC: At the AI Action Summit in Paris, European Commission President Ursula von der Leyen announced InvestAI, a €200 billion initiative to boost Europe’s AI competitiveness. In a landscape dominated by U.S. tech giants and China’s state-backed models, can Europe truly compete? And what does digital sovereignty look like for smaller associations trying to uphold their values while adopting global tech?

ML: Europe’s €200 billion InvestAI initiative is not about beating the US and China, but about redefining what winning looks like in the AI era, whilst democratising access to truly sovereign technology.
 
The US optimises for market dominance through venture capital and platform monopolies. China optimises for state control through massive public investment. Europe is optimising for something different: trustworthy AI that serves democratic societies, with open-source models as the great equaliser.
 
For international associations, digital sovereignty means strategic technology choices that align with your values, not just your budget.
 
Open-source models may become a good option here, and in this regard, Europe’s Mistral is one of the leaders.
 
Think of open-source AI as owning your building rather than renting from a landlord who can change terms at will. When you deploy models like Mistral or Llama, you gain genuine control over your technological destiny. Member data processing, algorithmic decisions, and privacy safeguards remain under your jurisdiction.
 
The technical barriers that once put open source beyond the reach of smaller organisations are dissolving. Managed services and platforms like Hugging Face make deployment accessible without requiring in-house machine learning teams.
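 
To illustrate how low that barrier has become, here is a minimal sketch in Python using the Hugging Face transformers library to run an open model on your own infrastructure, so member data never leaves your control. The model id is an example (some Mistral models require accepting a licence on huggingface.co first), and smaller open models can be substituted on modest hardware.

# Run an open-source model locally with Hugging Face `transformers`.
# Assumes `pip install transformers torch` and enough memory for the
# chosen model; the model id below is illustrative and may require
# accepting the model licence on huggingface.co.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",
)

prompt = "Draft a short welcome message for new association members."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])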
 
Europe is not just competing; it is creating alternatives for organisations prioritising member trust over raw efficiency, transparency over black-box performance, and democratic values over authoritarian control.


Curious how your team can build real AI fluency? 

Visit the AI Institute’s website to explore their full range of hands-on courses, designed to help associations adopt AI with clarity, confidence, and purpose: www.instituteofaistudies.com
 
 

Powered by Meeting Media Company, publisher of Headquarters Magazine (HQ) – a leading international publication based in Brussels, serving the global MICE industry and association community.
