Can emerging technologies truly be harnessed to uphold human rights, or are they a double-edged sword that could just as easily undermine them?
This age-old question is being posed at a fever pitch now that Artificial Intelligence has entered the chat, boasting big promises. It’s exactly why we were psyched to welcome an Igniting Change fan fave, Michael Kleinman, Senior Director of Technology and Human Rights at Amnesty International USA, back to the hot seat to pick his brilliant mind. We dive into the heart of tech’s role in human rights, tackling everything from the dark side of spyware to the battle over online freedom, and shine a light on the digital warriors fighting for a fairer world.
This is a conversation you don’t want to miss; catch the full episode below!
After his first appearance back in December 2022, Michael returns for another episode of Igniting Change! This time, he joins us to talk about his role and Amnesty’s work to address the human rights implications of new and emerging technology.
As Senior Director of Technology and Human Rights at Amnesty International USA, Michael’s role has three main focuses:
But that’s not the only impressive experience he has!
Michael started as a humanitarian and development aid worker in Afghanistan, moving to East and Central Africa, and then to Iraq. Afterwards, he worked at a foundation funding peacebuilding initiatives in East and West Africa.
Then, he founded a company that helped the United Nations run large-scale mobile phone surveys to help get better feedback from communities in conflict-affected countries.
This led Michael to work for Amnesty where he transitioned into the tech space.
The through line [of my career], more than anything else, is a curiosity about what I see as some of the main and major issues of our time
As the world’s leading human rights organisation, Amnesty International is trying to make big companies live up to their responsibilities under international human rights laws. Amnesty International is also pushing for effective regulation of emerging technologies. A great example of this kind of regulation is the EU AI Act.
The main successes that Michael talks about are:
Quantifiable research
Amnesty’s research quantifies the lived experiences of individuals. This shows the scale of these issues and links them back to specific choices companies have made, enabling action and improvement.
Reproductive rights online content in the US
This work focuses on quantifying how some social media platforms suppress content about reproductive rights. That suppression makes it increasingly difficult for people to access this information when they need it, but Michael mentions that progress is being made.
Michael explains that there is no aspect of life where our actions lead to purely positive, intended consequences: what we want as individuals might not be what someone else wants.
Everything is a trade-off, especially when it comes to making technology more rights-respecting.
These two realities, the negatives and the positives, can and must co-exist.
We can never have our cake and eat it too. We can never design a technology which does exactly what we want it to do without any negative consequences
So, before we end this blog on a positive, what are the main concerns or areas that Amnesty International wants to explore regarding AI?
Michael delved into each of the following areas:
Algorithmic bias
The massive amounts of data used to train AI reflect the biases of the institutions and organisations that decided the data was important and should be collected. Without monitoring the data we use, AI systems end up reflecting the racism, misogyny, and all of the other inequities we see in our cultures.
Misinformation and Disinformation
In particular, generative AI makes it much easier to deliberately produce misleading information at scale. It is going to become increasingly difficult to know who generated the content we see and what purpose it serves. But how can we address this without undermining the right to freedom of expression?
If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer
Hannah Arendt, German-American historian and philosopher
Surveillance
Where it was once unimaginable to sift through the masses of data generated by social media and the internet, generative technologies are making it increasingly easy to analyse and summarise that content. Now you don’t even need a human analyst. Relying on an algorithmic system in this way can be problematic when it is used incorrectly.
Personal Autonomy
As AI systems become more and more powerful, we will most likely rely on them as personal assistants. As we integrate AI further into our lives and careers, it is incredibly important to understand the biases these systems carry.
But it’s not all doom and gloom – there are incredible life-saving and life-changing benefits to AI. Tune in below to watch the full episode with Michael, learn more, and be inspired by the incredible work of Amnesty International.
For more things tech for good, stay tuned for our latest blog posts or shout us a holla to get in touch; we would love to hear from you 💚
Published on March 28, 2024, last updated on March 28, 2024