Advances in computing technology are rapidly changing the way we work. Human-computer interaction is a critical aspect of this ongoing change: workers need advanced HCI to interact successfully with emerging computing devices. In this series of conversations, we are asking experts with various backgrounds to help us understand how advanced HCI will support work in the future and how it will also allow workers to balance productivity with wellbeing. While we are primarily interested in how HCI can support new ways to work, technological solutions are always embedded in societal structures. For this reason, we will strive to understand the future of work as a relationship between technology and society.
In the near term, our conversations will explore the effects of the COVID-19 crisis on work through the prism of HCI. The COVID-19 crisis resulted in a sudden and dramatic change in how we work. For many of us, the well-known mainstays of work are gone, or drastically different than they were before - this includes the eight-hour workday, the office building, the morning commute, in-person conversations with coworkers, as well as sending children to school or daycare. How can HCI support worker productivity, as well as worker wellbeing, under these new circumstances? Furthermore, how do these new circumstances provide a window into the future of work?
The HCI community has a central role to play in creating the future of work. We invite you to join our conversations with leading experts about how we can best do this.
University of Washington
linda [at] uw.edu
Linda Ng Boyle is an Associate Professor in the Department of Civil & Environmental Engineering. Dr. Boyle's research centers on driving behavior, crash countermeasures, crash and safety analysis, and statistical modeling. Dr. Boyle is an associate editor for the journal Accident Analysis and Prevention and serves on the Transportation Research Board committees on Simulation and Measurement of Vehicle and Operator Performance and Statistical Methodology in Transportation Research.
University of New Hampshire
andrew.kun [at] unh.edu
Andrew Kun is an Associate Professor at UNH, ECE Department. Andrew was the principal investigator of the Project54 effort, which involved integrating embedded mobile computing equipment and wireless networking into police cruisers. Currently, a significant part of Andrew's research focuses on driving simulator-based exploration of in-car user interfaces and on methods for estimating drivers' cognitive load to determine the effect of the user interface on driving performance.
University of Wisconsin
jdlee [at] engr.wisc.edu
John D Lee currently works at the Department of Industrial and Systems Engineering, University of Wisconsin–Madison. John does research in Cognitive Engineering, with a focus on human-automation interaction. He is also a co-author of the third edition of the popular introductory human factors textbook.
Harvard Business School
rsadun [at] hbs.edu
Raffaella Sadun is a Professor at Harvard Business School. Professor Sadun's research focuses on the economics of productivity, management and organizational change. Her research documents the economic and cultural determinants of managerial choices, as well as their implications for organizational performance in both the private and public sector.
Wellesley College
oshaer [at] wellesley.edu
Orit Shaer is the Class of 1966 Associate Professor of Computer Science and co-director of the Media Arts and Sciences Program at Wellesley College. She founded and directs the Wellesley College Human-Computer Interaction (HCI) Lab. Her research focuses on next generation user interfaces including virtual and augmented reality, tangible, gestural, tactile, and multi-touch interaction.
University of Salzburg
manfred.tscheligi [at] sbg.ac.at
Dr. Tscheligi holds a Master's Degree in Business Informatics and a PhD in Social and Economic Science with a specialization in Applied Computer Science. Since coming to Salzburg, he has held several positions related to the development of the research group on Human-Computer Interaction (e.g., (Co-)Director of the former ICT&S Center, Co-Head of the Department of Computer Sciences). His work draws mainly on the interdisciplinary synergy of different fields to enrich the interaction between humans and systems.
Thursday, May 6, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: With increasing levels of driving automation, the driving tasks and the interaction between humans and vehicles will change. In this conversation, we will look briefly at three aspects of driving automation: technological, environmental, and social. Assuming that humans remain engaged in the driving task until full automation arrives, the question becomes: how should in-vehicle UIs be designed to take the user's and the context's state into account and support safe and smooth maneuvers? Besides drivers' comfort and safety, automated driving can have environmental effects. Given the rising problem of climate change, it is vital to design future technologies in a way that supports sustainable mobility. And finally, road traffic is a social situation. While the decisions related to the design of technology for automated vehicles are mostly technical, they have social consequences. Therefore, there is a need for a prosocial approach in the design of technology that reflects road users' well-being and aims for considerate and supportive behavior.
Bio: Shadan Sadeghian is a Post-Doc researcher in the department of "Ubiquitous Design / Experience and Interaction" at the University of Siegen. She studied computer science at the University of Bonn and RWTH Aachen and pursued her Ph.D. in Human-Computer Interaction at OFFIS Institute for Information Technology and the University of Oldenburg. She has also worked as a Ph.D. scholar at the Max Planck Institute for Biological Cybernetics in Tübingen and as a Post-Doc researcher at the Fraunhofer institute FKIE. Her research focuses on designing interaction and user experience in automated vehicles and in automated systems in a production management context.
When: Thursday, April 29, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: The United States Census predicts that by 2030, one in five people will be over the age of 65, with similar trends globally. Such rapid growth of the older adult population presents opportunities and challenges for social connectedness and well-being. Opportunities include learning from older adults who are engaging online and offline in creative and unexpected ways. Challenges include supporting the social and emotional well-being of those with inequitable access to in-person and digital support networks and communities. In this conversation, I will highlight my research that offers an alternative to how we think about aging, social connectedness, and online engagement by leveraging positive aging and strengths-based frameworks. Three projects that offer insight into these topics include (1) an interview study with older adult bloggers, showing how older adults engage in online communities in ways that contrast prior work; (2) an eye-tracking study of older adults' Facebook use that pushes back on a dichotomous framing of their online engagement and prioritizing visible social media behaviors; and (3) interviews with older adults about their COVID-19 tech experiences, showing how they engage in innovative connectedness behaviors when encountering disruptions in their social spaces.
Bio: Robin Brewer is an Assistant Professor in the School of Information at the University of Michigan. She also holds a courtesy appointment in Computer Science and Engineering. Her research lies at the intersection of accessibility and social computing where she studies how older adults and disabled people engage with technology, leveraging strengths of these communities to design for creativity, expression, and agency. Dr. Brewer holds a Ph.D. in Technology and Social Behavior from Northwestern University, M.S. in Human-Centered Computing from University of Maryland, Baltimore County, and B.S. in Computer Science from the University of Maryland, College Park.
When: Thursday, April 22, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Time and again, when technologists have imagined the future of work, they have done so without consideration of people who are blind. Look no further than the display you are currently reading—the first displays and touchscreens appeared in the 1960s and 70s, while the first screen reader to make them accessible wasn’t invented until 1986. This is not atypical; most technologies are indeed “retrofit” for accessibility, often years and decades after their first introduction. Given this, how exactly do blind people work in the 21st century? What technical barriers do they face, and to what extent are barriers technical as opposed to sociocultural? How do we break the innovate-retrofit cycle, and what role can HCI scholars and practitioners play? For the past 7 years, my research has explored these questions with blind students and collaborators, through qualitative inquiry and participatory design–an approach, I argue, that not only results in accessible technologies from the start, but that also can lead to radical innovation that improves work for all. I look forward to engaging these ideas in dialogue with you.
Bio: Stacy Branham is an Assistant Professor in Informatics at the University of California, Irvine (UCI). Her research investigates how technologies operate in social settings where one or more people has a disability, yielding actionable design guidance and proof of concept prototypes. Her recent and ongoing studies explore how technology can isolate, offend, and harm people with disabilities, as much as it has the potential to integrate and empower them when designed properly. Current themes in her work are: AI and bias; safety and wellbeing; the potential of voice assistants to universalize access; and co-design that involves people with disabilities throughout the design process. Her research has been funded by Toyota, TRX Systems Inc., Maryland Industrial Partnerships, and NSF, and her research publications have been recognized with Best Paper Awards at top-tier conferences CHI, ASSETS, and DIS. She is currently the Adjunct Chair for Accessibility on the SIGCHI Executive Committee. She received her BS and PhD in Computer Science from Virginia Tech.
When: Thursday, April 15, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Hardly a day passes without a controversy related to technology ethics. From bias in artificial intelligence to privacy violations on social media to systems that enable online harassment, when tech companies and researchers come under fire, people wonder: why are they not thinking about potential harms? Unintended consequences of technology are a significant social issue, both with respect to the harms that can result and the widespread impact on perception of the computing field. We need a cultural shift with respect to the role of ethics in computing that begins on day one of computing education and extends to the everyday work practice of technologists. However, even when technologists want to do the right thing, doing so requires speculation about future harms. How might we create this cultural shift and scaffold ethical speculation among technologists? And how might we understand the real impacts of technological harms on everyone, and give everyone the knowledge and tools to be more critical of technology?
Bio: Casey Fiesler is an assistant professor in the Department of Information Science (and Computer Science, by courtesy) at the University of Colorado Boulder. Armed with a PhD in Human-Centered Computing from Georgia Tech and a JD from Vanderbilt Law School, she primarily researches social computing, ethics, law, and fan communities (occasionally all at the same time).
When: Thursday, April 8, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Mind-wandering typically has a bad rep. Most people think it would be best to be concentrated all the time, but in reality our minds wander roughly half of the time. But is that really bad? My research shows that mind-wandering can indeed be bad, especially when it turns into depressive rumination (and we can create depressed computers by simulating these processes!). Yet at the same time, mind-wandering is also critical for allowing us to plan ahead or develop creative ideas, processes that we are slowly bringing into laboratory experiments, because measuring mind-wandering is a challenge. Another interesting question is whether mental training, for example through meditation, can change our mind-wandering. I will discuss how mindfulness may not necessarily reduce our mind-wandering, but is more likely to change our attitude towards our thoughts, which makes them less "sticky".
Bio: Marieke van Vugt is an assistant professor in the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence at the University of Groningen. Her research aims to understand how, when, and why we mind-wander. She uses a multimodal approach that combines computational modeling, scalp and intracranial EEG, behavioral studies, and eye-tracking. In addition, she is interested in how meditation practice affects our cognitive system, and she investigates meditation in both Western practitioners and Tibetan monks. She became a member of the Young Academy of Groningen in 2017.
Thursday, April 1, 8 AM PST (11 AM EST, 17:00 Central European time)
Social capital is the resource that develops from the establishment and maintenance of social relationships. It's an important resource for both individuals and groups, and is central to how many effective organizations operate. For 20 years, scholars have studied how computer mediation and the design of online systems affect the processes by which social capital develops, and what that means for potential new types of capital. For example, weak social ties were often too expensive to maintain, but computer mediation lowered that cost, making it easier to maintain large, diverse networks. Now, on top of changes to how we build relationships driven by technology, the global pandemic has radically changed the ways in which people communicate with one another and how they use technology to create new relationships and reinforce existing ones. Relationships of all strengths depend on attention, time spent, and "social grooming". How do we engage in those activities in a time of mass virtualization? What will the future of our social capital look like?
Bio: Cliff is a professor in the School of Information. Previously, he spent six years as an assistant professor in the College of Communication Arts and Sciences at Michigan State University. He researches the social and technical structures of large scale technology mediated communication, working with sites like Facebook, Wikipedia, Slashdot and Everything2. He has also been involved in the creation of multiple social media and online community projects, usually designed to enable collective action. One of Cliff's core values is combining top quality research with community engagement.
Thursday, March 25, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Remote and hybrid work is now more important than ever. For years, we have been exploring telepresence experiences that feel intuitive using spatial cues from real face-to-face meetings, such as gaze awareness and spatialized audio, to improve the experience for all the participants. We have also been working on making the workspace and shared task space on multiple form factors more effective using simultaneous pen and touch interaction techniques and context sensing. The goal of this discussion is to share some of the research we have done over the last two decades and explain how this work could have a chance to contribute to building the future workplace.
Bio: Michel Pahud has a Ph.D. in parallel computing from the Swiss Federal Institute of Technology. He won several prestigious awards, including the Logitech prize for an innovative industrially-oriented multiprocessor hardware/software project. He joined Microsoft in 2000 to work on many different projects, including innovation in telepresence/networking technologies and research in education. For more than a decade, he has been focusing on human-computer interaction at Microsoft Research. His research includes bimanual interaction, novel form factors, context sensing, cross-device interactions, haptics, and productivity in augmented and virtual reality. His work has been published in top-tier HCI conferences and has influenced product groups (e.g., simultaneous pen and touch support in Windows). Michel has personally demonstrated his research to executives at Microsoft including Bill Gates, Steve Ballmer and Satya Nadella. Over the years his research has been covered by the press, such as BBC News Technology, The Verge, The Washington Post, The Seattle Times, and many more.
Thursday, March 18, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Social media has given rise to a new form of influence. It provides voices to those who have not had them in traditional media, allows people to find communities who care about their specific issues, removes gatekeepers, and facilitates reach. Becoming an influencer is now a type of work, whether it's being a content creator or taking your voice as an outside expert onto these platforms. But reaching the public comes with expectations, responsibilities, criticism, and often some kind of harassment. Learning to manage the interactions, critiques, and privacy invasions is as important as learning the skills of creating good posts and building community.
Bio: Jen Golbeck is a Professor at the University of Maryland in College Park and is Director of the Social Intelligence Lab. She is an expert in social networks, social media, privacy, and security on the web. She is known for her work on computational social network analysis, and developed methods for inferring information about relationships and people in social networks. Her models for computing trust between people in social networks are among the first in the field, and social trust was used in early research on trust-based recommender systems. She was a program co-chair of ACM RecSys 2015. Golbeck has received attention for her work on computing personality traits and political preferences of individuals based on their social network profiles. Her presentation at TEDxMidatlantic, discussing the need for new methods of educating users about how to protect their personal data, was selected as one of TED's 2014 Year in Ideas talks. She also presented at TEDxGeorgetown about pets on the internet.
Thursday, March 11, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: With the advent of automated driving technology, trust in automation systems will play a big role in the successful acceptance of self-driving cars. Apart from enabling a smooth user experience for the occupants, a crucial challenge is the safe interaction of automated vehicles with other road users. How can a successful interaction be accomplished, and how effective are eHMIs (External Human-Machine Interfaces) for automated vehicles? Are they even necessary? How do we even measure "effectiveness"? How is this interaction different for the different kinds of road users (pedestrians, cyclists, drivers, etc.)? The early stages of research on eHMIs reveal several challenges and considerations that limit the possibilities from a design standpoint. Are there any "golden eggs" for this challenge? In this discussion, I would love to share our research on eHMIs, the insights we gained from them, and our vision and hopes of what this interaction may look like in the future.
Bio: Debargha (Dave) Dey is a Human Factors and Human-Computer Interaction (HCI) researcher currently affiliated with Eindhoven University of Technology in The Netherlands as a postdoctoral research fellow. His current research contributes to the domain of automated driving and traffic safety – looking specifically at highly- or fully-automated vehicles. He conducts empirical research and distills insights on human behavior to develop, test, and refine prototypes for interface systems. Apart from communication between automated vehicles with other road users, his research interests lie in the domain of trust in automation, take-over requests, Advanced Driver Assistance Systems (ADAS), Driver Safety, and Traffic Psychology. Prior to his current research, he was a UX researcher, collaborating with interdisciplinary teams to investigate interaction challenges and ambiguous concepts in software applications. He received a PhD cum laude in Human-Computer Interaction from the department of Industrial Design at Eindhoven University of Technology (TU/e) in 2020. Prior to that, he obtained a PDEng (Professional Doctorate in Engineering) in User-System Interaction from TU/e in 2017, and MS in Computer Science from Vanderbilt University in 2012.
Thursday, March 4, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Sex education in the United States urgently needs improvement. This is especially the case for students whose sexual and romantic experiences and interests are different from the norm. As a result, transgender and gender expansive youth often go online to fill in the gaps left by their sex ed curriculum. Partnering with Seattle Children's Hospital Research Institute, we used this as motivation to design an online sex education resource for trans and gender expansive youth, with trans and gender expansive youth as design partners. I will additionally tie this ongoing research to four tensions inherent to HCI research with marginalized people that I have previously identified—exploitation, membership, disclosure, and allyship. Through this lens on this specific project, I hope to speak to how we think about each tension and our experiences, successes, mistakes, and surprises.
Bio: Calvin Liang (he/they) is a PhD student in the department of Human Centered Design and Engineering at the University of Washington, co-advised by Dr. Sean Munson and Dr. Julie Kientz. He has a background in Human Factors Engineering from Tufts University. His research focuses on Queerness, Health, and Technology and examines how we might design technological solutions to support queer people with their health, while also recognizing that technological solutions are not always appropriate. Another side of their research explores how we might view our research contributions in HCI as forms of allyship, leveraging power and privilege when supporting the goals of marginalized people. Their work is rooted in values of inclusivity, equity, and justice.
Thursday, February 25, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Many accounts of data science work are eerily devoid of humans. We focus instead on the necessary and beneficial work that people do as part of data science. We show five ways in which humans intervene between "the data" and "the model," and we then take a deeper dive into the social construction of "ground truth" for machine learning. This work is sometimes done by individuals, but is more frequently done among teams of data science workers. We explore some of the configurations and activities of groups of data science workers. These studies help us to address inherent weaknesses in the unpopulated view of data science: by repopulating our understanding of data work, we can help humans to make their necessary and beneficial contributions.
Bio: Michael Muller works as a research scientist at IBM Research AI in Cambridge, MA, USA. He is concerned with individual and collaborative human work in technological settings, and with social justice issues in human interactions with humans and with technologies. He co-led workshops about human work in data science, and is working with four co-authors on a book about human-centered data science. Michael co-proposed the CHI 2021 subcommittee on Critical and Sustainable Computing; he also serves on SIGCHI CARES. ACM recognized Michael as a Distinguished Scientist, SIGCHI accepted him as a member of the SIGCHI Academy, and IBM recognized him as a Master Inventor.
Thursday, February 18, 8 AM PST (11 AM EST, 17:00 Central European time)
BLURB: Thomas Edison stated that the only thing the body is good for is to move the brain from place to place. He was wrong. The brain is inextricably part of the body and thus, the state of any part of the body affects any other - including the brain. Thus, when the body is working well - is "healthy" - any other part works better, too - including the brain. Given that simple principle, that the brain is part of the body, we can begin to ask whether our designs are aligned to support our whole bodies. Sedentarism, obesity, chronic lack of sleep, workplace stress, even myopia, all suggest that they are not. A goal of inbodied interaction design is to help us explore how we can align our designs with our physiology and anatomy to support our wellbeing to enable our best creative, social and emotional performance. It offers a set of models, such as tuning, insourcing and discomfort design, and approaches like "experiment in a box", to make the body's awesome complexity accessible to designers to help #makeNormalBetter
Bio: m.c. holds a professorship in computer science and human performance in the UK at the University of Southampton, where she leads the WellthLab - a human systems interaction group whose goal is to bring science, engineering and design together to help #makeNormalBetter for all @scale. m.c. is also a certified strength and conditioning specialist, functional neurologist and nutritionist. She enjoys helping folks, and women in particular, achieve their first pull up. Folks interested are invited to explore the Inbodied Interaction special issue in IX at http://interactions.acm.org/archive/section/march-april-2020/special-topic-inbodied-interaction and join us with your ideas/pictorials around insourcing designs at our chi2021 workshop https://wellthlab.ac.uk/inbodied4.
Thursday, February 11, 8 AM PST (11 AM EST, 17:00 Central European time)
Bio: Helena Mentis is an associate professor in the Department of Information Systems at the University of Maryland Baltimore County (UMBC), a faculty member in the Human Centered Computing graduate program, and the director of the Bodies in Motion Lab. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how collaboration and coordination are achieved and better supported, primarily with regards to information sharing and decision making in healthcare contexts. In turn, she develops interactive systems to investigate the effects of new mechanisms for collaboratively sensing, presenting, and interacting with information. For the past four years, she has been addressing this problem space in two fundamental streams of research: (1) imaging interaction in surgery and (2) patient empowerment.
Thursday, February 4, 8 AM PST (11 AM EST, 17:00 Central European time)
Abstract: Automated driving, with all its advantages and disadvantages, has been the subject of lively and controversial debate in recent years. In the context of this workshop series, too, a wide variety of aspects and possible fields of application have already been discussed - mainly with a focus on individual mobility. But when it comes to higher automation levels, and with the public discussion for more sustainability in transport, it becomes more and more likely that we will (have to) give up individual mobility and turn to shared mobility. Shared Autonomous Vehicles (SAVs) will become an important supplement to public transport, in particular in off-peak times and suburban areas that could not otherwise be served economically. Besides, clock-face scheduling of transport services can be increased, and demand-responsive transit can be introduced easily and at a reasonable cost. Just recently, unmanned aerial vehicles (UAVs), commonly termed drones, have been among the most intensely discussed emerging technologies that could expand mobility into the third dimension of low-level airspace. They offer boundless mobility and have the potential to become an iconic technology of the 21st century. SAVs and UAVs (together with L5 vehicles) can open up new target groups for mobility and thus rehabilitate and increase the independence and quality of life of many people who are currently excluded from passenger transportation. However, the future development of shared autonomous vehicles and drone technologies will be increasingly dependent on public acceptance, trust in the technology, and consideration of personal needs and expectations. In this conversation, I will give you insights into our research on this topic and discuss possible solutions for the identified problems with you.
Bio: Andreas Riener is professor for HCI and VR at Technische Hochschule Ingolstadt (THI) with a co-appointment at the CARISSMA Institute of Automated Driving (CIAD). He is head of the interdisciplinary "Human-Computer Interaction Group" at THI. Before moving to Germany, he was associate professor at Johannes Kepler University, Linz. His focus is hypothesis-driven experimental research in the area of driver and driving support systems at various levels (simulation, simulator studies, FOTs, NDSs). His research interests include driving ergonomics, driver state assessment from physiological measures, and trust/acceptance/ethics in automated driving. One particular interest is the methodological investigation of human factors for driver-vehicle interaction. Furthermore, he works with AR/MR/VR technology and develops novel interaction concepts. Riener's research has yielded more than 200 publications across various journals and conference proceedings in the broader field of HCI, AR/VR, human factors, and automated driving. He holds IEEE, ACM, and HFES Europe memberships. Andreas Riener is steering committee co-chair of ACM AutomotiveUI and chair of the German ACM SIGCHI chapter, as well as a member of the executive board of the German Computer Science Society (GI), HCI division.
Thursday, January 28, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Audrey Girouard
Abstract: Deformable and shape-changing devices offer users the ability to physically manipulate objects to interact with them. By combining flexible electronic technologies with human computer interaction, we can study how changing the form factor of digital devices can offer new interaction techniques to users. When wearable, devices can integrate on the body or on clothes to make this interaction more ubiquitous. These devices offer interaction opportunities in many domains, including in the workplace, and for many groups of users, including people with disabilities. In this conversation, I will discuss research on deformables and wearables conducted at Carleton University’s Creative Interactions Lab.
Bio: Audrey Girouard is an associate professor in the School of Information Technology at Carleton University. She teaches in the Interactive Multimedia and Design undergraduate program, in the master's of Human Computer Interaction graduate program, and in the master's and PhD in Information Technology graduate programs, specialized in digital media. Girouard's work pioneers novel interaction techniques with emerging user interfaces through software and hardware design, development and evaluation. Her research focuses on deformable user interactions, flexible displays, and bend gesture inputs. She has recently received a Technology Achievement Award from Partners in Research, an Early Researcher Award from the Ontario Ministry of Research and Innovation, and a Research Achievement Award from Carleton University.
Thursday, January 21, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Jessica Cauchard
Abstract: A critical health challenge during epidemics and catastrophes such as the COVID-19 pandemic is providing healthcare and maintaining continuity of care. Drones can be of invaluable help, providing rapid and efficient access to health care and emergency services during manpower shortages or social distancing. They can deliver relief supplies (e.g., first aid, medications) while carrying cameras and other sensors to support remote medical care and decision-making. We have been witnessing a technological revolution in the COVID-19 crisis, with drones being used – or misused – to monitor and inform populations. They offer tremendous further potential for public health; yet, their use is controversial due to risks to individuals’ safety and privacy, and because of this, it is unclear how drones will sustainably remain in use in human spaces, regardless of technological advances and benefits to society. In this conversation, we will discuss how drones can be equipped with interaction capabilities that improve their acceptability in human spaces.
Bio: Dr. Jessica Cauchard is an assistant professor in the department of Industrial Engineering and Management at Ben Gurion University of the Negev in Israel, where she recently founded the Magic Lab. Her research is rooted in the fields of Human-Computer and Human-Robot Interaction, with a focus on novel interaction techniques and ubiquitous computing. Previously, she was a faculty member in Computer Science at the Interdisciplinary Center Herzliya between 2017 and 2019. Before moving to Israel, Dr. Cauchard worked as a postdoctoral scholar at Stanford University. She has a strong interest in autonomous vehicles and intelligent devices and how they change our device ecology. She completed her PhD in Computer Science at the University of Bristol, UK in 2013 and received a Magic Grant from the Brown Institute for Media Innovation in 2015 for her work on interacting with drones.
Thursday, December 10, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Saiph Savage
Abstract: The A.I. industry has created new jobs that have been essential to the real-world deployment of intelligent systems. These new jobs typically focus on labeling data for machine learning models or having workers complete tasks that A.I. alone cannot do. Human labor with A.I. has powered a futuristic reality where self-driving cars and voice assistants are now commonplace. However, the workers powering our A.I. industry are often invisible to consumers. Together, this has facilitated a reality where these invisible workers are often paid below minimum wage and have limited career growth opportunities. In this talk, I will present how we can design a future of work that empowers the invisible workers behind our A.I. I propose a framework that transforms invisible A.I. labor into opportunities for skill growth and hourly wage increases, and that facilitates transitions to new creative jobs that are unlikely to be automated in the future. Taking inspiration from social theories on solidarity and collective action, my framework introduces two new techniques for creating career ladders within invisible A.I. labor: a) Solidarity Blockers, computational methods that use solidarity to collectively organize workers to help each other build new skills while completing invisible labor; and b) Entrepreneur Blocks, computational techniques that, inspired by collective action theory, guide invisible workers to create new creative solutions and startups in their communities. I will present case studies showcasing how this framework can drive positive social change for the invisible workers in our A.I. industry. I will also discuss how governments and civic organizations in Latin America and U.S. rural states can use the proposed framework to provide new and fair job opportunities. In contrast to prior research that focused primarily on improving A.I., this talk will empower you to create a future that has solidarity with the invisible workers in our A.I. industry.
Bio: Saiph Savage is the co-director of the Civic Innovation Lab at the National Autonomous University of Mexico (UNAM) and director of the HCI Lab at West Virginia University. Her research involves the areas of Crowdsourcing, Social Computing, and Civic Technology. For her research, Saiph has been recognized as one of the 35 Innovators under 35 by the MIT Technology Review. Her work has been covered in the BBC, Deutsche Welle, and the New York Times. Saiph frequently publishes in top-tier conferences, such as ACM CHI, AAAI ICWSM, the Web Conference, and ACM CSCW, where she has also won honorable mention awards. Saiph has received grants from the National Science Foundation, as well as funding from industry actors such as Google, Amazon, and Facebook Research. Saiph has opened the area of Human-Computer Interaction at West Virginia University, and has advised governments in Latin America on adopting human-centered design and machine learning to deliver smarter and more effective services to citizens. Saiph’s students have obtained fellowships and internships in both industry (e.g., Facebook Research, Twitch Research, and Microsoft Research) and academia (e.g., Oxford Internet Institute). Saiph holds a bachelor's degree in Computer Engineering from UNAM, and a Ph.D. in Computer Science from the University of California, Santa Barbara. Dr. Savage has also been a Visiting Professor in the Human-Computer Interaction Institute at Carnegie Mellon University (CMU).
Thursday, December 3, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Chris Janssen
Abstract: In this conversation, I will illustrate why the future of work, and research into it, requires a multi- and interdisciplinary perspective. I will draw upon three different experiences. First, in a recent review paper [1], I investigated research into human-automation interaction over the past fifty years and made some projections into the future. The review showed that automated technology is increasingly being applied in a wider set of contexts and being used by a wide(r) variety of humans. Creating successful interaction therefore requires an understanding of the domain of operation, of the technology, and of the human, and of how those three relate. This requires a multi- and interdisciplinary approach. Second, I draw upon my experience teaching on an interdisciplinary program in Artificial Intelligence (AI) in Utrecht. For this program, I led a team that revised part of the curriculum. A particular challenge that we faced was how to demonstrate the value of multi- and interdisciplinary work. Many students who enroll in AI programs nowadays are drawn by the engineering promise that technology can make our world a better place, and they are drawn to “hot” techniques such as deep learning. However, in Utrecht, we try to take a wider approach by also providing insights from, for example, psychology, linguistics, and philosophy. How can students be convinced of the relevance of such “non-engineering” fields for AI? This will draw upon insights that we also published recently [2].
Third, I will draw upon my personal experience, which has been in research at the intersection of HCI, AI, and Cognitive Science. The majority of my research has focused on human multitasking and distraction, particularly in automotive settings. I’ve had the pleasure of working on relevant problems with students and colleagues who come from different backgrounds. Being based in a psychology department, I see the value of such interdisciplinary work in that it can help demonstrate where concepts and theories from psychology can inform other domains, but also where other fields can show that existing theories break down. For example, we recently investigated [3] what is even meant by the words “model” and “simulation” in the context of human-automated vehicle interaction, and we found that across different fields, these words have very different meanings. How can we still benefit from each other’s insights? I look forward to the conversation and to your questions.
Citations (mostly open access; otherwise, pre-prints are available through my website)
[1] Janssen, C.P., Donker, S.F., Brumby, D.P., and Kun, A.L. (2019). History and Future of Human-Automation Interaction. International Journal of Human-Computer Studies, 131, 99-107.
[2] Janssen, C.P., Nouwen, R., Overvliet, K., Adriaans, F., Stuit, S., Deoskar, T., and Harvey, B. (2020, accepted). Multidisciplinary and interdisciplinary teaching in the Utrecht AI program: Why and how? IEEE Pervasive Computing.
[3] Janssen, C.P., Boyle, L., Ju, W., Riener, A., and Alvarez, I. (2020). Agents, Environments, Scenarios: A Framework for Examining Models and Simulations of Human-Vehicle Interaction. Transportation Research: Interdisciplinary Perspectives, 8, article 100214.
Bio: Chris Janssen is an assistant professor (tenured) at Utrecht University (The Netherlands) in experimental psychology. He obtained his PhD in Human-Computer Interaction from University College London (2012), and an MSc in Human-Machine Interaction (2008) and BSc in Artificial Intelligence (2006) from the University of Groningen. Before joining Utrecht, Chris worked as a post-doctoral fellow at the Smith-Kettlewell Eye Research Institute in San Francisco. Chris’ research focuses on understanding adaptive human behavior and human-automation interaction. Of particular interest are human behavior in multitasking settings, driver distraction, and interaction with automated systems such as autonomous cars. Chris is a member of the board of Utrecht’s Masters in Artificial Intelligence, and he leads Utrecht’s Special Interest Group in Social and Cognitive Modeling. He is an associate editor of the International Journal of Human-Computer Studies, and served as general chair of the 2019 ACM AutomotiveUI conference.
Thursday, November 19, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Katherine Isbister
Abstract: In this provocation-style talk, Isbister introduces the notion of 'Suprahuman' technologies: tools designed explicitly for the space between people, to better support social connection and dynamics. She will use examples from her lab's research-through-design practice to ground the concept, from social wearables to explorations in social virtual reality. Isbister originally introduced the concept of Suprahuman technologies at the Halfway to the Future symposium at Nottingham in 2019, and is excited to discuss this possibility space with participants in the Future of Work and Wellbeing conversation series.
Bio: Katherine Isbister is Professor of Computational Media at UCSC’s Engineering School. She directs the Social Emotional Technology Lab and the Center for Computational Experience. Her research team creates interactive experiences to heighten social and emotional connections and wellbeing, with over 100 peer-reviewed publications. Their research-through-design practice often includes elements of games and play. Industry support includes Intel, Google, Mozilla, and others, with federal support from NSF and NIH. Isbister was part of the Future of Work group at Stanford’s CASBS Center. She is a recipient of MIT Technology Review's Young Innovator Award, and is an ACM Distinguished Scientist.
Thursday, November 12, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Erin Solovey
Abstract: From elementary school math games to workplace training, computer-based learning applications are becoming more widespread. With these programs, it becomes increasingly possible to use the data generated, such as correct and incorrect problem-solving responses, to develop ways to test for student knowledge and to personalize instruction to student needs. The logs of student responses can capture answers, but they fail to capture critical information about what is happening during pauses between student interactions with the software. I will discuss my research in collaboration with the University of Pittsburgh and Lehigh University, in which we are exploring the use of brain signals alongside student log data to understand important mental activities during learning. With a better understanding of when and how learning occurs during pauses in learning system use, researchers and developers will be able to create adaptive interventions within learning and training systems that are better personalized to the needs of the individual. I will also discuss some of the challenges we face in bridging the fields of education research in complex, realistic learning environments and cognitive science research on learning. In particular, there is a frequent misalignment of the levels of analysis, or grain sizes, across the two fields. Cognitive science, including cognitive neuroscience research involving recordings of brain activity, traditionally requires paradigms with highly constrained stimuli, timing, and task requirements. HCI research in complex real-world environments rarely aligns with these paradigms. In our work, we develop methodologies for integrating them using brain data, strengthening the connection between cognitive research and educational research.
The use of machine learning techniques on our growing data corpus will further enable new insights to be drawn, and these insights will be used to improve the design of learning environments more broadly.
Bio: Dr. Solovey is an Assistant Professor of Computer Science at Worcester Polytechnic Institute. Her research expertise is in human-computer interaction, with a focus on accessibility and emerging interaction techniques, such as brain-computer interfaces. Her work has applications in areas including STEM education, Deaf education, health, driving, aviation, gaming, complex decision making, as well as human interaction with autonomous systems and vehicles. Solovey is committed to improving STEM education and broadening participation in computing.
Thursday, November 5, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Bastian Pfleging
Abstract: In the last century, the car has massively increased people’s mobility. Today, most of us spend a considerable amount of time in a car, commuting to work or for leisure, business, and vacation purposes. During the last decades, quite a few innovations made it into the car, offering new features (e.g. infotainment and connectivity), improving driving safety and assisting the driver, or improving driving comfort. At the same time, we see that many things did not really change, especially with regard to the required driving tasks, allowed and available activities, and the overall interior layout.
Besides increased driving safety, one expected advantage of automated driving is that all occupants (including the driver) can use the car as a new space for non-driving-related activities. These can include activities related to work, play, and relaxation. We expect many of these activities to be supported by some form of technology. The opportunity to perform these activities is expected to be a major selling point, and it is of interest to explore the opportunities and understand how to redesign the interior of such cars, especially from a technological and passenger-vehicle interaction perspective. In our research, we look at how users can interact with their automated car of the future and how it interacts with its environment. Which non-driving activities will we perform in such cars, and how? What changes if we switch to mobility as a service? How can we design and evaluate novel concepts? In this conversation, I plan to explore these questions about designing the experience and interactive environment of automated vehicles.
Bio: Bastian Pfleging is Assistant Professor for Design Research on Systems, Products, and Related Services for Future Everyday Mobility at Eindhoven University of Technology. With a background in computer science, his expertise is in the fields of human-computer interaction, ubiquitous systems, multimodal interaction, natural user interfaces, and specifically automotive user interfaces. His research interests include novel concepts for non-driving-related activities in the car and the user experience of vehicles in the transition to full automation. Bastian Pfleging is a steering committee member of the AutomotiveUI conference series and has served the scientific community in different chair roles at various conferences, such as CHI ’19 (associate chair, subcommittee on user experience and usability), PerDis ’19 (demo chair), Mensch und Computer ’19 (demo chair), UIST ’18 (treasurer), AutomotiveUI ’17 (program chair), and AutomotiveUI ’15 and ’16 (work-in-progress and interactive demo chair). He is also a member of the program committees of HCI-related scientific conferences and serves as a reviewer and guest editor for various conferences, journals, and magazines.
Thursday, October 22, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Birsen Donmez
Abstract: Driving has been transformed in recent years, but how have these technological advances impacted our safety? Today, vehicles are capable of detecting and reacting to hazards, maintaining speed and the correct distance from surrounding vehicles, and assuming some or even all aspects of vehicle control. We have also seen the rise of sensor, wireless communication, and computing technologies that enable drivers to engage in a variety of non-driving activities. While these innovations enhance the driving experience in many ways and can support aspects of work and wellbeing for the driver, certain implementations of technology create concerns around suboptimal monitoring of driving automation, inappropriate disengagement from driving, and lack of fitness to resume vehicle control.
In this conversation, I will share my views on driving automation technology regarding safety, work, and our wellbeing. I will also talk about our research that aims to improve driver coordination with state-of-the-art driving automation technology – arguably one of the more dangerous vehicle technologies. In particular, I will cover our research on drivers’ lack of understanding of higher levels of driving automation, driver state monitoring, and supporting anticipatory driving in an automated vehicle.
Bio: Birsen Donmez is a professor at the University of Toronto, Department of Mechanical & Industrial Engineering, and is the Canada Research Chair in Human Factors and Transportation. She received her BS in Mechanical Engineering from Bogazici University in 2001, her MS (2004) and PhD (2007) in industrial engineering, and her MS in statistics (2007) from the University of Iowa. Professor Donmez’s research interests are centered on understanding and improving human behavior and performance in multi-task and complex situations, using a wide range of analytical techniques. In particular, her research focuses on operator attention, decision support under uncertainty, and human automation interaction, with applications primarily in surface transportation and healthcare.
Thursday, October 15, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Duncan Brumby
Abstract: We are all utterly overwhelmed by the volume of email, and other forms of digital communication, that we receive. Every. Single. Day. It’s truly exhausting. To better understand how we prioritize unread emails, I will talk about a field experiment that we conducted at UCL. In this experiment we sent people a lot of email: 360 messages over 3 weeks. And then we paid them to respond to it (if they could keep up). As in the real world, not every email was the same. Some paid out more $$$s for a response, some demanded a rapid response, while others were easier to respond to. So, which would you prioritize? The most important, the most urgent, or the easiest? How might these features interact with one another to affect responses? I’ll tell you what we found during the talk.
Bio: Duncan Brumby is Professor of Human-Computer Interaction (HCI). He directs the HCI MSc program at UCL and leads a research group focused on investigating how people manage digital distractions. He is Editor-in-Chief of the International Journal of Human-Computer Studies. He has previously held appointments at Georgia Tech, Drexel University, Microsoft Research, and PARC.
Thursday, October 1, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Susanne Boll
Abstract: Experiences from teaching a hardware-oriented course remotely. Imagine you plan to teach a course with electronics and tinkering, with making and crafting in the lab, with students in teams – and then you must run it remotely while keeping physical distance. Our Makers’ Lab course suddenly had to be transformed into a remote course due to Covid-19. In this discussion, I will share our experiences of how we ran a practical hardware-oriented course – over a distance. We created a task that could be done remotely, bought hardware, shipped material to the students, and crafted and tinkered together – over a distance. The students delivered excellent results - but these also came at a price. While we were deeply impressed with the students’ performance, it became increasingly clear that this was also a result of the amount of effort put into the project. With all work shifted to the students’ homes and no distinction of place or time between university and daily life, students often worked far beyond what was common in prior years. Any future remote practical course must be careful to clearly communicate expectations and when they are fulfilled, even more so than an in-person class. In the future, we will work with students to investigate strategies for managing work-life balance when working from home.
Bio: Prof. Dr. Susanne Boll is Professor of Media Informatics and Multimedia Systems in the Department of Computing Science at the University of Oldenburg, in Germany. She serves on the executive board of the OFFIS Institute for Information Technology, in Oldenburg, where she heads many national and international research projects in multimedia information retrieval and intelligent user interfaces. Prof. Dr. Boll also serves as the scientific head of the Human-Machine Interaction technology cluster at OFFIS.
Thursday, September 24, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Andrea G. Parker
Abstract: In the United States, there are serious and persistent disparities in health outcomes, with low-socioeconomic status (low-SES), racial and ethnic minority, and older adult populations disproportionately experiencing poor health outcomes. These inequities are due in large part to the social determinants of health—social, cultural, economic, and societal conditions that can make it more challenging to achieve wellness.
Disruptive innovations are sorely needed to reduce health disparities. Information and communication technologies (ICTs), with their growing ubiquity and ability to provide engaging, informative, and empowering experiences for people, present exciting opportunities for health equity research.
In this talk, I will overview a set of case studies demonstrating work the Wellness Technology Lab has done to design, build, and evaluate how novel interactive computing experiences can address issues of health equity. These case studies investigate how social, mobile, and civic technology can help vulnerable and marginalized communities to both cope with barriers to wellness and address these barriers directly. I will conclude with opportunities and challenges for community wellness informatics—research that explores how ICTs can empower collectives to collaboratively pursue health and wellness goals.
Bio: Andrea Parker is an Associate Professor in the School of Interactive Computing at Georgia Tech, and an Adjunct Associate Professor in the Department of Behavioral Sciences and Health Education, within the Rollins School of Public Health at Emory University. Her research contributes to the fields of Human-Computer Interaction (HCI), Computer Supported Cooperative Work (CSCW), and Health Informatics. She designs and evaluates the impact of software tools that help people manage their health and wellness. Her research specifically focuses on health equity. She studies racial, ethnic, and economic health disparities and the social context of health management. She takes an ecological approach to technology design, whereby she conducts in-depth fieldwork to examine the intrapersonal, social, cultural, and environmental factors that influence a person's ability and desire to make healthy decisions, and how technology can support wellness in this context.
Thursday, September 17, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Linda Ng Boyle
Abstract: Advances in automation are getting us ever closer to self-driving vehicles. Of course, these vehicles will not be fully autonomous - drivers will control them much of the time. But some of the time it will be the automation that is in control. During these times, drivers will be able to engage in non-driving activities related to work and wellbeing. This opportunity brings up a number of important research questions, including: Which non-driving tasks are appropriate for automated vehicles? How are the opportunities to engage in these activities presented to the driver? Which interaction methods should be used for completing these tasks? And how can drivers safely switch back to driving when the automation needs them to take back control? These are the questions we plan to explore in this conversation about the future of work and wellbeing.
Bio: Linda Ng Boyle is Professor and Chair of the Industrial & Systems Engineering Department at the University of Washington, Seattle. She has a joint appointment in Civil & Environmental Engineering. She has degrees from the University at Buffalo (BS) and the University of Washington (MS, PhD). She is an organizer of the International Symposium on Human Factors in Driver Assessment and co-author of the textbook “Designing for People: An Introduction to Human Factors Engineering”.
Thursday, September 10, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Caitlin Mills
Abstract: Mind wandering – often defined as off-task thought – is central to the human experience and occupies up to 50% of our waking lives. On one hand, this could be problematic: when our minds wander, we risk missing critical information that can help build or strengthen our current mental model of a concept. On the other hand, it may offer critical opportunities for insights and creative thinking. In this talk, I will shed some light on the potential consequences and benefits of mind wandering, as well as promising efforts to detect and respond to mind wandering in real time.
Bio: Caitlin Mills is an Assistant Professor in the Psychology Department at the University of New Hampshire. She received her Ph.D. from the University of Notre Dame and then completed postdoctoral training at the University of British Columbia. Her research interests are at the intersection of psychology, cognitive neuroscience, and computer science. A particular focus is mind wandering: how to automatically detect it in everyday life settings, its relationship to affect, and its impact on learning. Other interests include investigating affective states such as boredom and confusion during complex learning and reading.
Thursday, September 3, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Dr. Amon Millner
Abstract: As a professor at a college committed to revolutionizing engineering education (Olin College) and a co-creator of a programming language designed to make computing as accessible as possible (Scratch), I have made the mantra "multiple paths for multiple learners" a core thread through my work. I will speak about how applying that theme through my HCI work has both helped and hampered efforts to increase the wellbeing of the young people I work with, who will be entering STEM workplaces in the future.
Bio: Dr. Amon Millner is an Associate Professor of Computing and Innovation directing the Extending Access to STEM Empowerment (EASE) Lab. His research and teaching are informed by his work in the Human-Computer Interaction (HCI) domain, drawing heavily from his specialization: developing tangible interactive systems for making and learning. He develops technology and community platforms that empower learners to make, and to make a difference in their neighborhoods.
Thursday, July 16, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: David A. Shamma
Abstract: The year 2020 threw many of us in the deep end through shelter-in-place and work-from-home orders; however, the practice of working from elsewhere was already on the rise. While remote collaboration research has been around for decades, we have seen several recent advancements in both artificial intelligence and media systems. These advancements change the methods we use to establish contact, make connections in shared spaces, and rethink what makes our work environment. From how a VR world’s architecture enhances or hinders social behaviors, to whether a remote user can establish gaze presence in a 360° conference room, we need to rethink how AI can enhance these experiences and assist people in their everyday work tasks and collaborations. We will discuss how human-centered AI advances XR and the Future of Work in industry research through real-world examples.
Bio: Dr. David Ayman Shamma is a distinguished industry scientist and director of research. He has worked on Edge AI and the future of work at FX Palo Alto Laboratory, AI sensors for wearable fashion at Centrum Wiskunde & Informatica (CWI), HCI + AI research as a director at Yahoo Labs/Flickr, and UXR for remote knowledge sharing at NASA’s Center for Mars Exploration. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a VP on the ACM SIGCHI Executive Committee. Ayman holds a Ph.D. in Computer Science from Northwestern University. His work has attracted international media attention, including Wired, The New York Times, and the Library of Congress.
Thursday, July 9, 8 AM PST (11 AM EST, 17:00 Central European time)
Speakers: Divy Thakkar, Neha Kumar, and Nithya Sambasivan
Abstract: The future of work is speculated to undergo profound change with increased automation. Predictable jobs are projected to be highly susceptible to technological developments. Many economies in the Global South are built around outsourcing and manual labour, and face a risk of job insecurity. In this work, we examine the perceptions and practices around automated futures of work among a population that is highly vulnerable to algorithms and robots entering rule-based and manual domains: vocational technicians. We present results from participatory action research with 38 vocational technician students of low socio-economic status in Bangalore, India. Our findings show that technicians were unfamiliar with the growth of automation but, upon learning about it, articulated an emic vision for a future of work in line with their value systems. Participants felt excluded by current technological platforms for skilling and job-seeking. We present opportunities for the technology industry and policymakers to build a future of work for vulnerable communities.
Divy Thakkar works at Google Research India and pursues research in HCI for development, with a specific interest in the intersection of HCI and AI in domains such as education, work, and AI for social good.
Neha Kumar is an Assistant Professor at Georgia Tech, with a joint appointment in the Sam Nunn School of International Affairs and the School of Interactive Computing. Her work lies at the intersection of human-centered computing and global development. She was trained as a computer scientist, designer, and ethnographer at UC Berkeley and Stanford University, and thrives in spaces where she can wear these three hats at once. Her research engages feminist perspectives and assets-based approaches towards designing technologies for and with underserved communities.
Nithya Sambasivan is a Staff Researcher at PAIR and lead for the HCI-AI group at Google Research India. Her research focuses on equitable human-AI interaction among marginalized communities. Sambasivan is an affiliate faculty at the Paul G. Allen Center for CS & Engineering at the University of Washington. She publishes in the areas of HCI, ICTD, and Privacy/Security. Sambasivan graduated with a Ph.D. from the University of California, Irvine, and an MS from Georgia Tech, focusing on HCI and under-represented communities. She has done stints at Microsoft Research India, IBM T J Watson, and Nokia Research Tampere.
Thursday, July 2, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Dr. Joseph Gabbard
As augmented reality (AR) applications move from research labs into the Future of Work, the need for high-quality AR user experiences (UX) will be essential. Although AR technology fundamentally changes the way we visualize, use, and interact with computer-based information, only a modest amount of human-computer interaction (HCI) work has been done on AR interfaces, especially in UX design and rigorous UX assessment. Encouragingly, traditional HCI methods can be applied to determine what information should be presented to the user. However, these approaches do not tell us how best to present and interact with AR information, a question that has not yet been adequately explored. In this talk, I discuss the nascent design opportunities and challenges afforded by AR technologies. I will bring to bear research projects from my career that illustrate the challenges associated with designing and evaluating effective AR interfaces and experiences, drawing examples from several application domains and systems. Time permitting, I will conclude with an argument that traditional HCI measures of effectiveness, such as time-on-task and errors alone, are insufficient for understanding the impact of AR on human performance.
Dr. Joseph L. Gabbard is director of the COGnitive Engineering for Novel Technologies (COGENT) Lab and Associate Professor of Human Factors at Virginia Tech’s Grado Department of Industrial and Systems Engineering. Dr. Gabbard is also an executive committee member of Virginia Tech’s Center for Human-Computer Interaction, one of the largest, oldest, and most diverse centers focused on HCI in the US. Dr. Gabbard received his PhD and MS in computer science from Virginia Tech; his Master’s thesis and doctoral dissertation focused on the usability of VR and AR systems, respectively. Dr. Gabbard’s research focuses on the connections between user interface design and human performance, and specifically on the development of techniques to design and evaluate novel AR user interfaces. Gabbard has been a pioneer in usability engineering for more than 20 years, applying it to, and creating methods for, new interactive systems. With funding from a variety of sources, he has developed several innovative methods for designing complex interactive systems and assessing their usability and impact on human performance, and has disseminated this work in over 100 publications.
Speaker: Neha Kumar
Panelists: Sachin Pendse (Georgia Tech), Aditya Vishwanath (Stanford University), Alberta Ansah (University of New Hampshire), Tolulope Oshinowo (Olin College), Diana Tosca (Wellesley College), Angel Cooper (Wellesley College), Julia Burmeister (Wellesley College)
HCI researchers and practitioners are part of interdisciplinary teams creating tools for the future of work and wellbeing. In this panel discussion students and young researchers will explore how technology should be designed to support inclusion, diversity and equity at work. Some of the questions the panel will cover include where current technology solutions succeed, and where they fail, to support inclusion, diversity and equity at work; how actively supporting inclusion, diversity and equity will influence work productivity and creativity, as well as overall wellbeing; and what students, faculty, practitioners, and consumers can do to bring about a better future of work and wellbeing.
Neha Kumar is an Assistant Professor at Georgia Tech, with a joint appointment in the Sam Nunn School of International Affairs and the School of Interactive Computing. Her work lies at the intersection of human-centered computing and global development. She was trained as a computer scientist, designer, and ethnographer at UC Berkeley and Stanford University, and thrives in spaces where she can wear these three hats at once. Her research engages feminist perspectives and assets-based approaches towards designing technologies for/with underserved communities.
Tuesday, June 23, 3 PM PST (6 PM EST, 24:00 Central European time)
Panelists: Mashhuda Glencross, Geraldine Fitzpatrick, and Jon Whittle (with moderator Aaron Quigley)
The distribution of the human species across the globe means that today there are people living in every single time zone on Earth. Some of these zones are sparsely populated due to their location within large open ocean areas; others are densely populated. All areas need the demarcation of time zones to accommodate national and international priorities. Being in the same time zone as a powerful economy can bring significant advantages to a developing one. At the same time, governments whose countries span multiple time zones need to think carefully about the costs of not having their entire population within a single time zone. Decisions made for economic or political reasons can thus lead to unusual situations for people living at the edges of a zone, who may experience extreme sunrise and sunset times. Nonetheless, our distribution can be a source of strength and resilience for our species. People in certain geographies, with the right skills and experiences, can find themselves acting as social and economic bridges between different regions of the world. Entire businesses operate in certain geographic and temporal regions simply to service the 24/7 needs of consumers globally.
In this talk we meet with a set of academics from the future. 17 hours from the future to be precise if you are living in San Francisco. Or 14 hours in the future if you are in New York or eight hours in the future if you’re in Germany. In this conversation we will discuss the future of work where our global distribution across economically developed and developing economies can be harnessed for our mutual benefit and what the implications are for the future of Human-Computer Interaction and the future of work.
Professor Aaron Quigley is a general co-chair for the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan, in 2021. Aaron is the incoming Head of School for UNSW’s Computer Science and Engineering in Sydney, Australia. In 2011 Aaron co-founded SACHI, the St Andrews Computer Human Interaction research group, and served as its director until 2018. In his volunteer roles, he is currently a member of the ACM SIGCHI CHI steering committee, a member of the ACM Europe Council Conferences Working Group, and an ACM Distinguished Speaker.
Dr. Mashhuda Glencross is a Senior Lecturer in the University of Queensland's Co-Innovation Group and a member of the Centre for Energy Data Innovation. Prior to joining UQ, she set up two UK-based technology startups: Pismo Software, a research and development consultancy, and Switch That, an IoT startup. She has worked as a lecturer at Leeds and Loughborough Universities, a product manager in the Media Processing Division at ARM in Cambridge, and a postdoctoral research fellow at The University of Manchester. Her research areas have included creating effective shared virtual environments, exploiting human perception to create the illusion of high-quality graphics, tactile interfaces, 3D reconstruction from photographs, material appearance modelling, visualisation, physically based simulation, IoT, and cyber security. She serves the computer graphics community as an elected director at large at ACM SIGGRAPH and is a member of the SIGGRAPH Asia Conference Advisory Group. She is also a member of the steering committee of the ACM PACM journals, an associate editor of the Computers & Graphics journal, and a senior member of the ACM.
Professor Jon Whittle is the Dean of the Faculty of IT at Monash University. He is a world-renowned expert in software engineering and human-computer interaction (HCI), with a particular interest in IT for social good. In software engineering, his work has focused on model-driven development (MDD) and, in particular, studies on industrial adoption of MDD. In HCI, he is interested in ways to develop software systems that embed social values. Before joining Monash, Jon was Head of the School of Computing and Communications at Lancaster University, where he led eight multi-institution, multi-disciplinary research projects. These projects focused on the role of IT in society, and included digital health projects, sustainability projects, and projects in digital civics. While in the UK, he was a Royal Society Wolfson Merit Awardee, a prestigious award given to outstanding and respected scientists in the UK.
Geraldine Fitzpatrick is Professor of Technology Design and Assessment and heads the Human Computer Interaction Group in the Informatics Faculty at TU Wien Austria. She is an ACM Distinguished Scientist, ACM Distinguished Speaker and IFIP TC-13 Pioneer Award recipient. She has a diverse background, with degrees in both Computer Science and Applied Positive Psychology/Coaching Psychology, experience working in industry as a UX consultant, and a prior background as a nurse/midwife. In all her work she takes a concern for people-led perspectives, quality of experience and developing potential. Her research is at the intersection of social and computer sciences, with a particular interest in collaboration, health and well-being, and community building. Her most recent peer service roles include general co-chair for CHI2019, papers co-chair for CSCW2018 and various international advisory boards. She also hosts the Changing Academic Life podcast series.
Thursday, June 18, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Raffaella Sadun
Though it started as a health shock, Covid-19 is now creating havoc in economies around the world. I will discuss the early evidence on the impact of Covid-19 on workers and firms, as well as the possible role that technology may play in helping workers and firms recover from the crisis in the medium to long term. The talk will focus on two specific topics. First, how can digital education programs help low-skilled workers overcome the current crisis, and what are the challenges in implementing these programs today? Second, what should small and medium firms do to attenuate the impact of the crisis, and what is the role played by ICT investments in this phase? I will base my talk on my past and current economics research in this area, as well as my recent experience advising the Italian government in a Covid-19 socio-economic task force.
Raffaella Sadun is a Professor at Harvard Business School. Professor Sadun's research focuses on the economics of productivity, management and organizational change. Her research documents the economic and cultural determinants of managerial choices, as well as their implications for organizational performance in both the private and public sector.
Thursday, June 11, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Ed Doran, Microsoft Research
We are excited to continue our interview series, this week with Ed Doran, Ph.D., from Microsoft Research (MSR). Ed leads the product management team for MSR's new incubations. We'll be talking about how Microsoft and MSR think about developing new products or businesses, and how academic programs are crucial to creating the future of work. We'll also explore how assistants and artificial intelligence might shape the future of work and the smart home. Finally, we'll talk a bit about how assistants and AI move out of the home and work, and into our lives on the go. As always, we'll have plenty of time for questions and answers with the audience. We will wrap up with a 30-minute social after the conversation.
My focus is helping great insights and ideas evolve into compelling products. I started out as a scientist, trained as a researcher, and ultimately fell in love with applying those skills to creating new products and businesses. I joined MSR from the Cortana team, where I co-founded the product, led product planning, and led some targeted AI innovation and ecosystem projects (e.g. the ambient "smart" home, connected and intelligent cars, modern productivity in the enterprise, and new intelligent devices). Prior to Microsoft, I led insight strategy teams for Yahoo!, helping build new products and better businesses across search, browse, rich media, and ecommerce. And even before that, I led market research, user research, business intelligence, and management consulting teams focused on applying research to make the organization smarter and better (e.g., new product design, pricing and distribution strategy, brand and marketing). Farthest back in the reaches of time, I led various scientific projects including molecular biology & recombinant DNA, complex environmental sampling, new hardware innovations, and statistical modelling. Finally, I hold a Ph.D. and a deep love of coffee.
Thursday, June 4, 8 AM PST (11 AM EST, 17:00 Central European time)
Speakers: Dr. Maartje de Graaf, Dr. Wendy Ju, and Dr. Holly Yanco (with Dr. Christian Janssen as moderator).
Automation and robotics have made tremendous progress over the last few decades. Automated technology is no longer just for trained experts in constrained environments like factories, but is also used by non-experts in their homes, offices, and during their commute. A recent article in the BBC suggests that the current Covid-19 pandemic might even increase the application of robots. They might, for example, help with cleaning high-risk areas, or help to maintain a bit more social distance in restaurants while diners still receive table service.
Is this truly what the future holds for us? How will we interact with these robots? What do people expect of them, how do they react to them? And does this change if the robot violates our expectations? In this panel, we will discuss these questions and more with a panel of experts on human-robot interaction.
Maartje de Graaf is an assistant professor at Utrecht University (The Netherlands) in Information and Computing Sciences. She has a Bachelor of Business Administration in Communication Management (2005), a Master of Science in Media Communication (2011), and a PhD in Communication Science and Human-Robot Interaction (2015). Before starting at Utrecht University, she was a postdoctoral researcher affiliated with the Department of Communication Science at University of Twente (2015-2016) and later with the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University (2017-2018).
Her research focuses on people's social, emotional, and cognitive responses to robots, aiming for the development of socially acceptable robots that benefit society. She is an Associate Editor of Transactions on Human-Robot Interaction, an At-Large member of the HRI Steering Committee, and part of the Organizing Committee of HRI 2020, and has served as a social science expert at the IEEE Standards Association. Her honors include selection as an HRI Pioneer (2014), inclusion in 25 Women in Robotics (robohub.org, 2017), and Inspiring Fifty Netherlands (Inspiring Fifty, 2019).
Wendy Ju is an Assistant Professor of Information Science in the Jacobs Technion-Cornell Institute at Cornell Tech in New York City. She has a PhD in Mechanical Engineering, Design from Stanford University, and a MS in Media Arts and Technology from the MIT Media Lab. Before joining Cornell, Wendy was executive director at Stanford’s Center for Design Research.
Wendy’s research focuses on the design of human-machine interactions, particularly with automation. A signature feature of her work is the development of novel experimental instruments to reveal how people will behave in a variety of future scenarios. Her monograph, The Design of Implicit Interactions, is published by Morgan & Claypool.
Dr. Holly Yanco is a Distinguished University Professor, Professor of Computer Science, and Director of the New England Robotics Validation and Experimentation (NERVE) Center at the University of Massachusetts Lowell. Her research interests include human-robot interaction, evaluation metrics and methods for robot systems, and the use of robots in K-12 education to broaden participation in computer science. Application domains for her research include assistive technology, urban search and rescue, manufacturing, and exoskeletons. Yanco's research has been funded by NSF, including a CAREER Award, the Advanced Robotics for Manufacturing (ARM) Institute, ARO, CCDC-SC, DARPA, DOE-EM, ONR, NASA, NIST, Google, Microsoft, and Verizon.
Yanco is a member of the Computing Research Association (CRA) Computing Community Consortium (CCC) Council and is Co-Chair of the Massachusetts Technology Leadership Council’s Robotics Cluster. She served as Co-Chair of the Steering Committee for the ACM/IEEE International Conference on Human-Robot Interaction from 2013-2016, and was a member of the Executive Council of the Association for the Advancement of Artificial Intelligence (AAAI) from 2006-2009. Yanco has a PhD and MS in Computer Science from the Massachusetts Institute of Technology and a BA in Computer Science and Philosophy from Wellesley College.
Chris Janssen is an assistant professor (tenured) at Utrecht University (The Netherlands) in experimental psychology. He obtained his PhD in Human-Computer Interaction from University College London (2012), and his MSc in Human-Machine Interaction (2008) and BSc in Artificial Intelligence (2006) from the University of Groningen. Before joining Utrecht, Chris worked as a post-doctoral fellow at the Smith-Kettlewell Eye Research Institute in San Francisco.
Chris’ research focuses on understanding adaptive human behaviour and human-automation interaction. Of particular interest are human behavior in multitasking settings, driver distraction, and interaction with automated systems such as autonomous cars. Chris is a member of the board of Utrecht’s Masters in Artificial Intelligence, and he leads Utrecht’s Special Interest Group in Social and Cognitive Modeling. He is an associate editor of the International Journal of Human-Computer Studies, and served as general chair of the 2019 ACM Auto-UI conference.
Thursday, May 28, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Gloria Mark
Most of us spend our days in two different environments: an offline physical world and an online digital world. In current times, many people are spending more time with digital media than they had been accustomed to. In this talk I will discuss empirical research on how people use digital media in their everyday lives. Our research has identified that people have short attention spans and are highly distractible when on devices. I will first talk about the role of cognitive processes in using digital media and how mental resources are taxed, for example when people switch rapidly among multiple tasks or try to inhibit distractions. Attention is goal-directed, and maintaining goals is especially hard when working with digital media, where there are competing demands on attention. Attentional states can also vary over the day depending on our moods, tasks, and context. There are also individual differences and physiological effects in maintaining focus while on digital media. Task switching can be viewed through different lenses, which can be leveraged to present solutions to increase focus. More broadly, I claim that placing the burden on individuals to be self-disciplined and focus is the wrong approach. I will discuss other strategies, including how technology might support people in being more focused and productive with digital media.
Gloria Mark is a Professor in the Department of Informatics, University of California, Irvine. Her research focuses on studying how the use of digital technology impacts our lives in real-world contexts. Her goal is to use these insights to promote positive experiences for information technology use to increase health and well-being. She received her PhD in Psychology from Columbia University. Prior to UCI she worked at the German National Research Center for Information Technology (GMD, now Fraunhofer Institute) and has been an ongoing visiting researcher at Microsoft Research since 2012, and also had been a visiting researcher at IBM, National University of Singapore, and the MIT Media Lab. She was inducted into the ACM SIGCHI Academy in 2017 and has been a Fulbright scholar. Her work has appeared in the top conferences and journals in the field of Human-Computer Interaction and she has won multiple paper awards. She was the general co-chair for the ACM CHI 2017 conference, and is on the editorial boards of the ACM TOCHI and Human-Computer Interaction journals. Her work has appeared in the popular press such as The New York Times, The Atlantic, the BBC, NPR, Time and The Wall Street Journal. She was invited to present her work at the Aspen Ideas Festival and has presented at SXSW conferences.
Thursday, May 21, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Albrecht Schmidt
In the last century, changes to workplace health and safety have transformed blue-collar work in many countries. Dangerous jobs and hazardous environments, very common 100 years ago, have been systematically improved. In office and knowledge work, we see a reverse trend. Many jobs require sitting for long hours, breaks are not mandated, communication and interaction are high-paced, and performance monitoring (by the employees themselves or their managers) is ubiquitous. Software and user interface design for office applications and digital communication systems are in many ways the core of the problem rather than the solution. Calendar tools, micro-tasking, gamification, performance measures, instant messaging, multi-conferencing, and social media integration are topics widely researched in the HCI community that aim at optimizing output at work, very often evaluated in a one- or two-hour session. In the long term, many of these apparent solutions are detrimental to people’s physical and mental health. In our research, we look at three areas to design healthier work environments: increasing physical activity, promoting human-to-human communication, and making achievements tangible. For more details, see our recent article, Technologies for Healthy Work, in the ACM interactions magazine.
Albrecht Schmidt is professor for Human-Centered Ubiquitous Media in the computer science department of the Ludwig-Maximilians-Universität München in Germany. He studied computer science in Ulm and Manchester and received a PhD from Lancaster University, UK, in 2003. In his research, he investigates the inherent complexity of human-computer interaction in ubiquitous computing environments, particularly in view of increasing computer intelligence and system autonomy. Albrecht has actively contributed to the scientific discourse in human-computer interaction through the development, deployment, and study of functional prototypes of interactive systems and interface technologies in different real-world domains. In his early work, he proposed the concept of implicit human-computer interaction. Over the years, he worked on automotive user interfaces, tangible interaction, interactive public display systems, interaction with large high-resolution screens, and physiological interfaces. Most recently, he focuses on how information technology can provide cognitive and perceptual support to amplify the human mind. To investigate this further, he received an ERC grant in 2016. Albrecht has co-chaired several SIGCHI conferences; he is on the editorial board of ACM TOCHI and edits a forum in ACM interactions. The ACM conferences on tangible and embedded interaction in 2007 and on automotive user interfaces in 2010 were co-founded by him. In 2018, Albrecht was inducted into the ACM SIGCHI Academy.
Thursday, May 14, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Regan Mandryk
Digital gaming has been shown to provide cognitive, emotional, and social benefits, but can also lead to problematic play and harm depending on the game, the player, and the context of play. My overarching research goal is to model the complex relationships between gaming and its effects and harness these relationships to design games and gaming interfaces that improve the wellbeing of players. Together with my students, I focus on connecting people through play, developing game-based biomarkers to assess wellbeing (with a focus on anxiety and depression), understanding how games help us recover from stress, and inventing new ways of understanding player experience. Digital gaming provides many benefits to players, and I will talk about how games can be used in this time of self-isolation to connect, motivate, entertain, and support us.
Regan Mandryk is a professor in Computer Science at the University of Saskatchewan; she pioneered the area of physiological evaluation for computer games in her award-winning Ph.D. research at Simon Fraser University with support from Electronic Arts. With over 200 publications that have been cited thousands of times (including one of Google Scholar's 10 classic papers in HCI from 2006), she continues to investigate novel ways of understanding players and their experiences, but also develops and evaluates games for preventing, assessing, and treating mental health issues, and games that foster interpersonal relationships. Regan has been an invited keynote speaker at several international game conferences, led games research in the Canadian GRAND Network, has organized international conferences (including the inaugural CHI PLAY, the inaugural CHI Games Subcommittee, and CHI 2018), and leads the first-ever Canadian graduate training program on games user research (SWaGUR.ca) with $2.5 million of support from NSERC. She was inducted into the Royal Society of Canada's College of New Scholars, Artists and Scientists in 2014, and received the University of Saskatchewan New Researcher Award in 2015, the Canadian Association for Computer Science's Outstanding Young Canadian Computer Science Researcher Prize in 2016, and the prestigious E.W.R. Steacie Fellowship in 2018.
Thursday, May 7, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Gregory Welch
For years many of us have been thinking about personal agents—virtual humans (entities) that would be with us all the time—at just the right times and in just the right amounts, to improve our lives. Together with various colleagues I’ve recently been dedicating some thought to the workplace in general, and nursing in particular. While not the driving factor, the current pandemic feels relevant in that front-line healthcare workers are overwhelmed logistically and emotionally, and often deprived of normal personal companionship—coworkers and even family. While not a replacement for human companionship, personal agents would not need to practice social distancing, and could serve a useful purpose in times like these and in general. I would like to share some philosophical (non-technical) thoughts about such workplace agents, and perhaps spark some fun discussion. Example topics include proactive and even prescient practical help, emotional support, social psychology issues, and engineering vs. social research.
Gregory Welch is a Pegasus Professor and the AdventHealth Endowed Chair in Healthcare Simulation at the University of Central Florida College of Nursing. A computer scientist and engineer, he also has appointments in the College of Engineering and Computer Science and in the Institute for Simulation & Training. Welch earned his B.S. in Electrical Engineering Technology from Purdue University (Highest Distinction), and his M.S. and Ph.D. in Computer Science from the University of North Carolina at Chapel Hill (UNC). Previously, he was a research professor at UNC. He also worked at NASA’s Jet Propulsion Laboratory and at Northrop-Grumman’s Defense Systems Division. His research interests include human-computer interaction, human motion tracking, virtual and augmented reality, computer graphics and vision, and training related applications. His awards include the IEEE Virtual Reality Technical Achievement Award in 2018 (VR 2018), and the Long Lasting Impact Paper Award at the 15th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2016).
Thursday, April 30, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Stephen Brewster
As AR and VR technologies improve, they could replace the standard desktop monitors and workspaces that we use for our office work and productivity. This means we have greater flexibility about how and where we work. I could recreate my multi-monitor office setup at home just by putting on a headset; I could work in the back of a car or on the train when commuting; or I could even improve my office setup by having displays all around me, just for the cost of a headset. In our research at Glasgow, we are investigating how we can use Mixed Reality (MR) to enable this. We are looking at supporting productivity through the design of effective virtual workspaces, allowing users to escape the confines of their physical environment and be immersed in the virtual world, and promoting collaboration with distant others. A key focus is on travelling and how we can enable travellers to be more productive.
We have identified three key issues that must be solved before working in MR can be successful: constrained spaces such as car seats limit our movements and interactions; working in this way raises new social acceptability issues; and motion sickness resulting from using MR on the move can significantly reduce our abilities. I will discuss the advantages and disadvantages of working in this new way and how we are attempting to design solutions in our ongoing research on the ViAjeRo project (https://viajero-project.org/).
Stephen Brewster FRSE is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK, where he runs the Multimodal Interaction Group. His main research interest is multimodal human-computer interaction, including sound, haptics, and gestures. Brewster received a PhD from the Human-Computer Interaction Group at the University of York. He has organized the Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI). He is also an organiser of the CHI Conference on Human Factors in Computing Systems, alongside Geraldine Fitzpatrick. He has contributed to several scientific books. Brewster was elected a Fellow of the Royal Society of Edinburgh in March 2017.
Thursday, April 23, 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Anna Cox
The COVID-19 pandemic has resulted in a number of governments advising citizens to engage in "social distancing" measures that include working from home where possible and limiting the number of times they leave their homes. Working at home during this time has made the "future of work" a sudden reality for many. Our mobile devices mean we can work from anywhere, as long as we stick to our bedrooms and kitchens. Work can be done at any time, fitted in around home-schooling and other caring responsibilities. Without the usual boundaries between work and home, we're struggling. Drawing on my research investigating the challenges experienced by remote workers, and the strategies they use to overcome them, I'll talk about how we can use our technology to help get work done, reclaim our work-life boundaries, and enhance our wellbeing.
Prof Anna L. Cox is a Professor of Human-Computer Interaction and Vice Dean (Equality, Diversity & Inclusion) of the UCL Faculty of Brain Sciences. She uses theories and methods from social science to study digital technology use in order to help people be happier, healthier and more productive. Her research focuses on the role of digital technologies in: getting work done (including task design and personal task management); the experience of being always-on (including dealing with interruptions and digital work-life boundary management); and providing digital support for when people are struggling (including interventions to aid focus at work, strategies for managing digital boundaries, and dealing with work-related stress).
Thursday, April 16, 2020 8 AM PST (11 AM EST, 17:00 Central European time)
Speaker: Shamsi Iqbal
Research on productivity and multitasking has to adapt to the changing world and anticipate what the future may look like – in particular, taking into account the growing need to balance work and life. The natural direction that work was taking only a few weeks ago has been challenged in recent times, and established norms and practices are adapting as a result. Are these changes for the better? Are we gaining a much more realistic view of what a true balance of work and life may look like?
My research has focused on redefining productivity for a world where doing work is no longer confined to being at a desk, and where the need to do things on the go and in divided-attention situations continues to dominate. Artificial intelligence is also fundamentally challenging what we envision as the future of work: now that machine intelligence can supplement human intelligence, people can be empowered to attempt and achieve beyond what was once thought possible. However, to blend in the needs of emotional and physical well-being, we need to approach this challenge in a human-centric way. This work brings together theories from cognitive science, human-computer interaction and artificial intelligence. I will discuss a few ongoing projects in this area and present directions for research and product development.
Dr. Shamsi T. Iqbal is a Principal Researcher in the Information and Data Sciences (IDEAS) group in Microsoft Research, AI. Her primary expertise is in the domain of attention management and interruptions. More recently, her work has focused on redefining productivity, introducing novel ways of being productive and balancing productivity and well-being in interaction design. Her work on driving and distraction has been featured in the New York Times and MIT Technology Review, among others, as well as on King 5 News (the NBC affiliate in the Seattle area). Shamsi has served on many organizing and program committees for HCI conferences, is currently serving as an ACM TOCHI Associate Editor, and will serve as General Co-Chair for UIST 2020. She received her Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2008 and her Bachelor's in Computer Science and Engineering from Bangladesh University of Engineering and Technology in 2001.