00:20:48 Diana Tosca: https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/
00:22:24 Orit Shaer: https://www.computer.org/csdl/magazine/ex/2020/04/09179098/1mDpzA9vc1W
00:25:40 Orit Shaer: Questions? Feel free to put them in the chat:
00:28:16 Diana Tosca: I’ve been thinking about filter bubbles w/ news in the last couple of years, and how social media works to reinforce them. What are your thoughts on this? Is that something we as consumers should work to combat, or is that something social media companies should regulate themselves?
00:33:45 Orit Shaer: Questions?
00:34:07 Shayla Zamora: Is it the content creators’ responsibility to monitor the chat rather than companies/platforms? Do you believe companies should also have an active role in addressing misinformation?
00:34:11 Lori B --- Hopper T Dog sent me...: What do you see for the future of social media? Is it just more/different platforms? Improvements?
00:34:18 Peter Mawhorter (he/him): Have you seen any efforts to build decentralized social media platforms, and do you think such efforts could be one route toward less-bad social media platforms?
00:37:42 Hunter Clary: Ugh. The worst.
00:39:22 Hunter Clary: HBO Max
00:39:31 Diana Tosca: https://www.hbo.com/documentaries/fake-famous
00:40:44 Alberta Ansah: What do you think we can do to equip social media users, especially the older generation, to be more conscious of their posts and the data they share?
00:41:35 Diana Tosca: ^Especially when they share photos of their children, for example, before the child can be conscious of the data implications
00:44:09 Deirdre Kelliher (she/her): The idea of a 3rd-party filtering tool that shows you happier/healthier content is really interesting and exciting! On the flip side, do you think there’s a role for 3rd-party moderation (human or AI) in parts of social media that are more negative or polarizing? What do you think would be most important in making that moderation effective?
00:44:50 Diana Tosca: https://press.princeton.edu/books/hardcover/9780691203423/breaking-the-social-media-prism
00:50:31 Eni Mustafaraj (she/her): https://www.washingtonpost.com/technology/2021/03/14/facebook-vaccine-hesistancy-qanon/
00:54:41 Eni Mustafaraj (she/her): Right, for example, Mastodon: https://en.wikipedia.org/wiki/Mastodon_(software)
00:58:29 Eni Mustafaraj (she/her): Apropos paying content creators: https://thehypothesis.substack.com/p/heres-why-substacks-scam-worked-so. Annalee Newitz is an author who writes for free on Substack but is upset with the policy of secret payments.
01:00:19 Deirdre Kelliher (she/her): On the topic of having social media platforms filter out fake news and malicious content/users, how do you think that would be affected by the growing mistrust in traditional authorities on truth and the growth of conspiracy mindsets? How can we navigate that and potentially re-establish trust?
01:00:34 Eni Mustafaraj (she/her): @Jen, does your TikTok label you as a paid content creator?
01:02:57 Hunter Clary: Twitter did that to me yesterday before sharing a story link I didn’t click on.
01:04:56 Peter Mawhorter (he/him): (I don’t think that these tools are complete, nor are they necessarily very usable for the average person, but the EFF has a suite of privacy-related tools: https://www.eff.org/pages/tools)
01:06:45 Peter Mawhorter (he/him): I feel like a lot of this mistrust is really a case of “sometimes you can’t have nice things” because institutions have given people legitimate reasons not to trust them, even though it’s certainly been amplified by bad actors...