Inspiring women in AI: Dr. Gry Hasselbalch

Dr. Gry Hasselbalch is the author of several books, including Human Power: Seven Traits for the Politics of the AI Machine Age.

She has worked for 20 years on the geopolitics of technology, data, the internet, and AI, and is a co-founder of the think tank dataethics.eu.

Use AI for tasks where you don't lose your own power...

She was a member of the EU's high-level group on AI, where she contributed to writing the ethical requirements that have now been translated into the AI Act.

In this interview (including video clips) for our inspiring women in AI series, she discusses her work and gives advice for overcoming challenges when working in the field.

Image of Gry Hasselbalch

"The global AI race, driven by profit, affects how large language models (LLMs) are built"

What AI-related projects are you currently working on?

My new book is about human power, our human identity, and the role of not only AI but technology in general.

I'm looking at technology politics with a focus on AI, and from a humanistic perspective, describing seven traits of human power.

Cover of "Human Power"

These traits are currently being defined by tech gurus and big tech, but if we look at them from the humanistic tradition, social sciences, or the arts, there's a much more nuanced understanding.

Right now, I'm very interested in the global power dynamics and geopolitics of AI. We have these big AI systems dominating the discourse, with business and political interests behind them. Then suddenly a Chinese model [DeepSeek] comes out that is just as good as ChatGPT. It's a global AI race.

I'm interested in how this race influences the values we build into these technologies.

In the EU, AI policies are based on what's referred to as the human-centric approach to AI, defined by law and ethical principles, considering both humanity and the environment. It's important to understand the pressure that the global AI race puts on such an approach, because at the moment the biggest and most dominant models in this race have huge flaws in terms of ethics and legal implications: things like copyright infringement, privacy violations, and censorship.

Recently, I've also been looking at AI's impact on human language and cultural diversity.

The global AI race, driven by profit, affects how LLMs are built. For example, they primarily support the most dominant spoken languages in the world, so there is a risk of leaving smaller, less profitable languages behind.

What inspired you to pursue a career in AI and related fields?

For the last 20 years, I have worked on technology politics, human rights, and the ethics of new digital technologies.

Around the time in Europe when they started discussing data protection regulations, I was focusing on data and the implications of the big data economy.

AI added this extra layer of complexity to the big data economy, so my work evolved into a focus on AI and the politics of AI – the geopolitics, the power dynamics, the ethics, the social implications and so on.

I have a humanistic background. From the very beginning, it was a question of understanding the role of technology in terms of our human identity and what makes us human. We've been dependent on big tech digital infrastructures for a very long time. I was looking at how it was transforming the human identity.

What recent or potential breakthroughs in AI are you most excited about?

AI's hidden costs: Privacy, copyright, and identity (view transcript)

I'm more worried than excited at the moment.

I do see some exciting breakthroughs, but that's mostly projects from minority communities or ecosystems that are not part of the global AI race.

For instance, developments in terms of AI for health, AI to support the sustainable development goals, and things like that.

They are the areas I would put most of my faith in, but that's not what we hear about in the public discourse.

What potential risks or downsides of AI development concern you?

One I'm concerned about, especially now that we have a very uncertain political situation, is the role of autonomous weapons and AI used in wartime. Those are the moments that have the biggest ethical challenges.

We have ethical "norms and rules" regarding how we have to behave and what we can do. But remember, in wartime, these norms are suddenly put aside. You can actually kill another human being. Ethical norms basically transform and can even be set aside in moments of war and crisis.

We saw this during the pandemic, for example. A lot of AI systems were suddenly developed and adopted very quickly. But they were flawed and ethically problematic, as there was no time to make proper ethics impact assessments. These systems would, for example, decide who would get a respirator and who would get access to medicine.

When you are in a crisis situation, some ethical barriers and ethical assessments are not considered important. This can cause us to make big mistakes in terms of how we deploy and develop AI.

What challenges have you faced as a woman in the AI field, and how have you overcome them?

As a woman, you just learn to deal with gender bias in society in general. But computer science is a field known for being biased.

If you look at the history of computers, like the first ENIAC [Electronic Numerical Integrator and Computer] launched in 1946, the women who programmed it were forgotten for many years. We knew the names of the two guys who built the physical computer, but it took about 40 years to recognize the women in those black-and-white pictures as the programmers.

Even today, if you look at AI teams at companies like Meta and Google, surveys show that only about 10 or 15% of the developers are women. So, it's the same.

For me, combining my female identity with my humanistic perspective on AI, which isn't very popular in the dominant tech-focused field, makes it even more challenging. The tech solutionist approach to AI, where AI is seen as the solution to everything, doesn't always welcome more abstract and what is considered “touchy-feely” ideas.

But there are openings in some areas. I remember being invited some years ago to a meeting with a U.S. commissioner who was really interested in my last book.

He wanted to know how to translate love and compassion into politics, which is a concept in my book based on philosophical ideas. He seriously wanted to change his approach and perspective.

I've also interviewed technology politicians, and I often realize that those who have personal histories of repression or persecution are more open to diverse perspectives.

There are emerging spaces for people from minority cultures or with minority experiences. Even though I'm still a white Danish privileged person, I can also feel the change. There's room for more diversity in the field.

What initiatives or changes would you like to see to encourage more women to enter the field of AI?

I think in general, when you walk into a room, you can either feel "at home" or not "at home."

I've been in this field for 20 years, so I've learned to ignore the places where I don't feel "at home." It's always more complicated and difficult to voice your views in those settings, but over time, I've learned to exist in those spaces. I had to.

Conferences and event organizers need to get better at choosing diverse panels, not just in terms of gender balance but all kinds of balances.

The excuse that there aren't enough women isn't correct. For example, there's a wonderful initiative called 100 Women in AI Ethics, which is a directory of brilliant women working in AI ethics from all over the world.

Another area is the media. Even in Denmark, which is supposed to be non-gender biased, the most cited experts in Danish media are mostly men. There were no women in the top 20 of a recent list.

We need to be really aware of who is represented in public situations, debates, and media.

Human skills first: Preparing for an AI-driven world (view transcript)

What advice would you give to young women considering a career in AI?

Men are really good at creating alliances and little communities where they support each other.

I think women could be a lot better at that. If you're a young person wanting to start working in AI, try to reach out to women who have been in the field for a while and ask for mentorship and support.

And if you're a woman who has worked in the field for a while, you should reach out to younger women, support them, and help out.

We need to be better at creating alliances and communities.

So, the best advice I can give is to reach out to other women and seek help when you need it.

What advice would you give to other women for getting started with using AI in their research, work, or life?

The first step is to be very cautious because you can make so many mistakes.

For example, with generative AI systems that help with writing, it's easy to start relying on them too much. Humans are fundamentally lazy, or we get tired and stressed with our work, so it's tempting to use AI to write our texts or take over tasks.

The most important part is to make a strategy from the beginning. For instance, you might use a chatbot to help with a budget if you're not good at math, but you want to do the entire writing process yourself for an article. It's not just for ethical or moral reasons but because if you use AI for all writing tasks, you lose practice and de-skill yourself. In the future, we might have many people good at writing prompts but fewer skilled in creative writing.

We need to take care of not de-skilling ourselves. Make choices about how you want to use AI and be aware of the data you share with it. Use AI for tasks where you don't lose your own power.