30s Ad: $16 - $19
60s Ad: $19 - $22
CPM Category: Technology
Different podcast categories command different CPM (cost per mille) rates based on advertiser demand and audience value.
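Assuming the figures above are CPM rates (i.e., quoted per 1,000 downloads), an advertiser's spend for a slot scales with an episode's expected audience. Below is a minimal sketch of that arithmetic; the 25,000-download figure is hypothetical, chosen only to illustrate the calculation.

```python
def estimated_ad_cost(downloads: int, cpm: float) -> float:
    """Estimate the price of one ad slot from expected downloads and a CPM rate.

    CPM (cost per mille) is the amount an advertiser pays per 1,000 downloads.
    """
    return downloads / 1000 * cpm


# Hypothetical example: 25,000 expected downloads, priced at the low end
# of the 60-second range listed above ($19 CPM).
print(estimated_ad_cost(25_000, 19.0))  # -> 475.0
```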
Presenting timely conversations about the purpose and power of technology that bridge our interdisciplinary research with broader public conversations about the societal implications of data and automation.
For more information, visit datasociety.net.
Producers, Hosts, and Production Team
No producer information is currently available for this podcast.
Emails, Phones, and Addresses
Contact Page Emails
Emails listed specifically on the website's official contact page.
Emails
Phone Numbers
No phone numbers found.
Addresses
No addresses found.
General Website Emails
Emails found on general website pages (e.g., about, info), not the main contact page.
No website emails found.
Externally Sourced Emails
Emails discovered using automated web scraping across the internet.
No external emails found.
RSS Emails
Email addresses extracted directly from the website's or podcast's RSS feed(s).
Here's a quick summary of the last 5 episodes on Data & Society.
Hosts
Maia Woluchem
Alix Dunn
Previous Guests
Anita Say Chan
Anita Say Chan is an author and scholar known for her work on the intersections of technology, data, and social justice. She is the author of 'Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future', which explores the historical and contemporary implications of data practices in the tech industry, particularly in relation to marginalized communities.
Émile P. Torres
Émile P. Torres is a researcher and advocate focused on the societal impacts of technology and data. He has extensively studied the second wave of eugenics and its manifestations in modern tech ideologies, contributing to discussions on the ethical implications of data use in society.
Timnit Gebru
Timnit Gebru is a prominent computer scientist and advocate for diversity in technology. She is known for her research on algorithmic bias and the ethical implications of artificial intelligence. Gebru has been a vocal critic of the tech industry's practices regarding data and has contributed significantly to the discourse on responsible AI.
Aiha Nguyen
Aiha Nguyen is the Program Director for the Labor Futures Initiative at Data & Society, where she guides research and engagement. She brings a practitioner's perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers and Generative AI and Labor: Power, Hype and Value at Work.
Alexandra Mateescu
Alexandra Mateescu is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.
Nia Johnson
Nia Johnson is an academic and practitioner focused on the intersection of technology and social justice. She has worked on various projects that examine how digital infrastructures impact marginalized communities and seeks to promote equitable access to technology.
Ekene Ijeoma
Ekene Ijeoma is an artist and researcher known for his work that explores the social implications of technology and data. He has been involved in projects that address issues of representation and equity in digital spaces, often using art as a medium to provoke thought and discussion.
Lori Regattieri
Lori Regattieri is an academic and artist whose work focuses on the impact of digital infrastructures on society. She engages in interdisciplinary research that examines the relationship between technology, culture, and community, aiming to highlight the narratives that emerge from these interactions.
Lama Ahmad
Lama Ahmad is a researcher and expert in the field of artificial intelligence and its societal implications. She focuses on the intersection of technology and public policy, particularly in the context of generative AI and its governance.
Camille François
Camille François is a prominent figure in AI policy and governance. She has extensive experience in developing frameworks for responsible AI use and is known for her work on open-source infrastructure and global policy related to technology.
Tarleton Gillespie
Tarleton Gillespie is a scholar and author specializing in the social implications of digital media and technology. He has contributed significantly to discussions on labor, content moderation, and the governance of AI systems.
Briana Vecchione
Briana Vecchione is an advocate for accountability and participation in technology governance. She works on initiatives that promote community involvement in the evaluation and oversight of AI systems.
Borhane Blili-Hamelin
Borhane Blili-Hamelin is a researcher focused on AI risk and vulnerability. He is involved in projects that aim to empower communities to recognize and manage harmful flaws in AI technologies.
Brian Chen
Brian J. Chen is the policy director of Data & Society, where he leads the organization's work to shape tech policy. He has a background in movement lawyering and legislative and regulatory advocacy, with a focus on economic justice, political economy, and tech governance.
Topics Discussed
predatory data
eugenics
big tech
datafication
TESCREAL bundle
racial divisions
political violence
generative AI
labor
automation
worker advocacy
digital surveillance
caregiving
algorithmic power
digital infrastructures
land disputes
climate effects
social fabrics
myths of progress
transformation
speculation
technological infrastructures
local communities
narratives of process
power
change
futurity
red-teaming
public interest
AI governance
evaluation
community engagement
semiconductors
chip production
Taiwan
US economic policy
colonial infrastructures
economic justice
tech governance
At the turn of the 20th century, the anti-immigration and eugenics movements used data about marginalized people to fuel racial divisions and political violence under the guise of streamlining society toward the future. Today, as the tech industry champions itself as a global leader of progress and innovation, we are falling into the same trap.
On April 10th, Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future (UCP 2025, open access), joined Émile P. Torres and Timnit Gebru for a discussion, moderated by Trustworthy Infrastructures Program Director Maia Woluchem, of the 21st-century eugenics revival in big tech and how to resist it. Predatory Data is the first book to draw a direct line between the datafication and prediction techniques of past eugenicists and today’s often violent and extractive “big data” regimes. Torres and Gebru have also extensively studied the second wave of eugenics, identifying a suite of tech-utopian ideologies they call the TESCREAL bundle.
Purchase your own copy of Anita Say Chan’s book Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future: https://bookshop.org/a/14284/9780520402843.
generative AI, labor, automation, worker advocacy, digital surveillance, caregiving, algorithmic power
Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?
In this episode of Computer Says Maybe, host Alix Dunn speaks with Data & Society researchers Aiha Nguyen and Alexandra Mateescu, authors of the primer Generative AI and Labor: Power, Hype, and Value at Work. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially traditionally feminized work like caregiving.
Aiha Nguyen is the Program Director for the Labor Futures Initiative at Data & Society, where she guides research and engagement. She brings a practitioner's perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers and Generative AI and Labor: Power, Hype and Value at Work.
Alexandra Mateescu is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.
Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Duration: 1:02:06
Connective (t)Issues: Stories of Digitality, Infrastructures, and Resistance | Public Panel
Hosts
Hosts of this podcast episode
Maia Woluchem
Guests
Guests of this podcast episode
Nia Johnson, Ekene Ijeoma, Lori Regattieri
Keywords
Keywords of this podcast episode
digital infrastructures, land disputes, climate effects, social fabrics, myths of progress, transformation, speculation, technological infrastructures, local communities, narratives of process, power, change, futurity
Physical and digital infrastructures have raised tensions around the world, seeding land disputes, contributing to climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. To explore these themes, we were joined by Nia Johnson, Ekene Ijeoma, and Lori Regattieri — academics, practitioners, and artists who are each, in their own way, responding to the ways digital infrastructures are transforming the built, natural, and social environments. In a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem, we broke down confrontations between technological infrastructures and local communities and discussed how to reshape narratives of process, power, change, and futurity.
This public panel is part of Connective (t)Issues, a Data & Society workshop organized by the Trustworthy Infrastructures program in partnership with Duke Science & Society. Learn more about the workshop at datasociety.net.
What exactly is generative AI (genAI) red-teaming? What strategies and standards should guide its implementation? And how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examined red-teaming’s place in the evolving landscape of genAI evaluation and governance.
Our discussion drew on a new report by Data & Society (D&S) and AI Risk and Vulnerability Alliance (ARVA), a nonprofit that aims to empower communities to recognize, diagnose, and manage harmful flaws in AI. The report, Red-Teaming in the Public Interest, investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with the evaluation and oversight of genAI systems. Red-teaming offers a flexible approach to uncovering a wide range of problems with genAI models. It also offers new opportunities for incorporating diverse communities into AI governance practices.
Ultimately, we hope this report and discussion present a vision of red-teaming as an area of public interest sociotechnical experimentation.
10:23 Lama Ahmad on The Value of Human Red-Teaming
17:37 Tarleton Gillespie on Labor and Content Moderation Antecedents
25:03 Briana Vecchione on Participation & Accountability
28:25 Camille François on Global Policy and Open-source Infrastructure
35:09 Questions and Answers
56:39 Final Takeaways
Do you ever wonder how semiconductors (AKA chips) — the things that make up the fine tapestry of modern life — get made? And why does so much chip production bottleneck in Taiwan?
Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it's thoroughly entangled in US economic policy, and how Taiwan's place as the main artery for chips is the product of deep colonial infrastructures.
Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.
Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Ratings
Global:
Global ratings are aggregates of the individual country ratings.