Sunday, December 11, 2022

Human Centered AI Live Stream: Sota Researcher!

The research engineer Phil Butler from our lab is starting a new live stream on Human Centered AI. Through his live stream he will help you design and implement AI for people.
  • In each live stream you will learn how to design and create AI for people from start to finish. He will teach you how to use different design methodologies such as mockups, storyboards, and service design, as well as different AI models and recent state-of-the-art techniques. Each live stream includes code that will help you build a complete AI-for-people project.
  • Some of the topics he will cover in his live stream include: Understanding and Detecting Bias in AI; Design principles for Designing Fair and Just AI; How to Create Explainable AI.
  • The streams will benefit anyone who wants to learn how to create AI on their own, while also respecting human values.
  • The stream will help people learn how to implement AI using state-of-the-art techniques (which is key for getting top industry jobs) while remaining ethical and just about the AI that is created.


Join us! https://www.youtube.com/@sotasearcher

Designing Public Interest Tech to Fight Disinformation

Our research lab organized a series of talks with NATO around how to design public interest infrastructure to fight disinformation globally. Our collaborator Victor Storchan wrote this great piece on the topic:
Disinformation has increasingly become one of the most prominent threats and a global challenge for democracies and our modern societies. It is now entering a new era where the challenge is twofold: it has become both a socio-political problem and a cyber-security problem. Both aspects have to be mitigated at a global level but require different types of responses.
Let’s first give some historical perspective.
  • Disinformation didn’t emerge with the automation and social network platforms of our era. In the 1840s, Balzac was already describing how praising or denigrating reviews spread through Paris to promote or downgrade publishers of novels or the owners of theaters. However, innovation, and AI in particular, has given threat actors technological capabilities to scale the creation of misleading content.
  • More recently, in the 2000s, technologists were excited about the ethos of moving fast and breaking things. People were basically saying “let’s iterate fast, let’s ship quickly, and let’s think about the consequences later.”
  • After the 2010s, with the rise of deep learning increasingly used in industry, we saw a new tension emerge between velocity and validation. It was not about the personal philosophy of different stakeholders pushing to go “a little bit faster” or “a little bit slower,” but rather about the cultural and organizational contexts of most organizations.
  • Now, AI is entering the new era of foundation models. With large language models we have consumer-facing tools such as search engines and recommendation systems. With generative AI, we can turn audio or text into video at scale very efficiently. The technology of foundation models is at once becoming more accessible to users, cheaper, and more powerful than ever. That means better AI to achieve complex tasks, solve math problems, and address climate change. However, it also means cheap fake-media generation tools and cheap ways to propagate disinformation and target victims.

This is the moment we are in today. Crucially, disinformation is not only a socio-political problem but also a cyber-security problem. Cheap deep-fake technology has been commoditized, enabling targeted disinformation in which people receive specific, personalized disinformation through different channels (online platforms, targeted emails, phone). It will become ever more fine-grained. It has already started to affect people’s lives, emotions, finances, and health.

The need for a multi-stakeholder approach as close as possible to AI system design. The way we mitigate disinformation as a cyber-security problem is tied to the way we deploy large AI systems and the way we evaluate them. We need new auditing tools and third-party auditing procedures to make sure that deployed systems are trustworthy and robust to adverse threats and to toxic content dissemination. As such, AI safety is not only an engineering problem but a truly multi-stakeholder challenge that will only be addressable if non-technical parties are included in the loop of how we design the technology. Engineers have to collaborate with experts in cognition, psychologists, linguists, lawyers, journalists, and civil society in general. Let’s give a concrete example: mitigating disinformation as a cyber-security problem means protecting the at-risk user and possibly curing the affected user. It may require access to personal and possibly private information to create effective counter-arguments. As a consequence, it implies arbitrating a tradeoff between privacy and disinformation mitigation that engineers alone cannot decide. We need a multi-stakeholder framework to arbitrate such tradeoffs when building AI tooling, as well as to improve transparency and reporting.


The need for a macroscopic multi-stakeholder approach. Similarly, at a macroscopic level, there is a need for profound global cooperation and a coalition of researchers to address disinformation as a global issue. We need international cooperation at a very particular moment, as our world is being reorganized. We are living through a great paradox: new conflicts are emerging and structuring the world, and at the same time disinformation requires international cooperation. At the macroscopic level, disinformation is not just a technological problem; it is one additional layer on top of poverty, inequality, and ongoing strategic confrontation. Disinformation is a layer that adds to the international disorder and amplifies the others. As such, we also need a multi-stakeholder approach bringing together governments, corporations, universities, NGOs, the independent research community, and others. Very concretely, Europe has taken legislative action with the DSA to regulate harmful content, but it is now clear that regulation alone won’t be able to analyze, detect, and identify fake media. In that regard, the Christchurch Call to Action summit is a positive first step but has not yet led to systemic change.

The problem of communication. However, communication between engineers, AI scientists, and non-technical stakeholders generates a lot of friction. These multiple worlds don’t speak the same language. Fighting disinformation is not only a problem of resources (access to data and access to compute) but also a problem of communication, where we need new processes and tooling to redefine the way we collaborate in alliance to fight disinformation. These actors are collaborating in a world where it is becoming increasingly difficult to understand AI capabilities and, as a consequence, to put in place the right mechanisms to fight adverse threats like disinformation. It is more and more difficult to truly assess the improvement of AI. This is what Gary Marcus calls the demoware effect: technology that is good for a demo but not in the real world. It confuses not only political leaders but also engineers (e.g., Blake Lemoine at Google). Many leaders assume false capabilities about AI and struggle to monitor it. Let us suggest two causes. First, technology is increasingly a geopolitical issue, which does not encourage more transparency and accountability. Second, the information asymmetry between the private and public sectors, and the gap between the reality of the technology deployed in industry and the perception of public decision-makers, has grown considerably, at the risk of focusing the debate on technological chimeras that distract from the real societal problems posed by AI, like disinformation and the ways to fight it.

Sunday, November 20, 2022

List of MIT Tech Review Inspiring Innovators!

We are part of the amazing network of the 35 Innovators Under 35 by the MIT Tech Review. We got invited to their EmTech event and had an amazing dinner with other innovators and people having an impact in the field. We are very thankful to Bryan Bryson for the invitation, and we also want to congratulate him and his team for all the work done to build such a vibrant innovation ecosystem.



I share below a list of some of the innovators I met. Keep an eye on them!

*Setor Zilevu (Meta and Virginia Tech). Working at the intersection of human-computer interaction and machine learning to create semi-automated, in-home therapy for stroke patients. After his father suffered a stroke, Zilevu wanted to understand how to integrate those two fields in a way that would enable patients at home to get the same type of therapy, including high-quality feedback, that they might get in a hospital. The semi-automated human-computer interaction, which Zilevu calls the “tacit computable empower” method, can be applied to other domains both within and outside health care, he says.

Sarah B. Nelson is Chief Design Officer and Distinguished Designer for Kyndryl Vital, Kyndryl’s designer-led co-creation experience. From the emergence of the web through the maturing of user experience practice, Sarah has been known throughout the design industry as a thought leader in design-led organizational transformation, participatory design, and forward-looking design capability development. At Kyndryl, she leads the design profession, partnering with technical strategists to integrate experience-ecosystem thinking into technical solutions. Sarah is an encaustic painter and passionate surfer.

*Moses Namara (Meta and Clemson University). Namara co-created the Black in Artificial Intelligence graduate application mentoring program to help students applying to graduate school. The program, run through the resource group Black in AI, has mentored 400 applicants, 200 of whom have been accepted to competitive AI programs. It provides an array of resources: mentorship from current PhD students and professors, CV evaluations, and advice on where to apply. Namara now sees the mentorship system evolving to the next logical step: helping Black PhD and master’s students find that first job.

*Joanne Jang (OpenAI). Joanne Jang is the product lead of DALL·E, an AI system by OpenAI that creates original images and artwork from a natural language description. Joanne and her team were responsible for turning the DALL·E research into a tool people can use to extend their creative processes and for building safeguards to ensure the technology will be used responsibly. The DALL·E beta was introduced in July 2022 and now has more than 1 million users.

Daniel Salinas (Colombia). His start-up monitors plants with nanotechnology by connecting them to computers, facilitating decarbonization. Humans suffer from "plant blindness": our biases keep us from perceiving plants the way we perceive animals. This plant-human disconnect means that tree-planting projects meant to capture carbon in the face of the climate crisis are not sustainable if the reforestation is not maintained over time. Colombian entrepreneurship student Daniel Salinas discovered the lack of infrastructure in the fight for decarbonization through a tree-planting start-up. As he recalls: "Every time we went into the field we had problems." To bridge this disconnect between people and trees, Salinas created a plant-computer interface that makes it possible to track vegetation through his start-up Superplants. With this contribution, Salinas became one of MIT Technology Review en español's Innovators Under 35 Latin America 2022.
Relevant References:
-https://www.building-up.org/knowledgehub/innovadores-menores-de-35-latinoamrica-2022
-https://event.technologyreview.com/emtech-mit-2022/speakers
-https://www.technologyreview.com/innovator/setor-zilevu/

Friday, November 11, 2022

Recap: AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2022)


This week we attended the AAAI Conference on Human Computation and Crowdsourcing (HCOMP'22). We were excited about attending for several reasons: (1) we were organizing HCOMP's CrowdCamp and excited to help drive the direction of this event within the conference, (2) it was the 10-year anniversary of the conference and we were elated to reflect collectively on how far we have come as a field over the years, (3) we chaired one of the HCOMP keynotes, given by our PhD hero, Dr. Seth Cooper, and (4) we had an important announcement to share with the community!
WE WILL BE GENERAL CO-CHAIRS OF HCOMP’23!

Organizing CrowdCamp.

This year, Dr. Anhong Guo from the University of Michigan and I had the honor of organizing HCOMP's CrowdCamp, a very unique part of the HCOMP conference. It is a type of mini hackathon where you get together with crowdsourcing experts and define the novel research papers and prototypes that push forward the state of the art in crowdsourcing. Previous CrowdCamps led to key papers in the field, such as the Future of Crowd Work paper and my own CHI paper on Subcontracting Micro Work.

This year, when we put out the call for CrowdCamp, we witnessed an interesting dynamic. A large number of participants were students, novices to crowdsourcing, but with great interest in learning about and then impacting the field. This dynamic reminded me of what I had encountered when I organized my first hackathon, FixIT: the participants had great visions and energy for changing the world! But they also had limited skills to execute their ideas, and they lacked data to determine whether their ideas were actually worth pursuing. To address these challenges, in the past I gave hackathon participants bootcamps to ramp up their technical skills (this allowed them to execute some of their visions). We also taught these participants about human-centered design to empower them to create artifacts and solutions that match people's needs, rather than a hammer in search of nails.

For CrowdCamp, we decided to do a similar thing:
We had a mini-bootcamp, organized by Toloka (a crowdsourcing platform), that explained how to design and create crowd-powered systems. The bootcamp started with a short introduction to what crowdsourcing is, common types of crowdsourcing projects (like image/text/audio/video classification), and interesting ones (like side-by-side comparison, data collection, and spatial crowdsourcing). After that, the bootcamp introduced the Toloka platform and some of its unique features. Then the bootcamp briefly presented the Toloka Python SDK (Toloka-Kit and Crowd-Kit) and moved to an example project on creating a crowd-powered system, specifically a face-detection one. The code used in the bootcamp is in the following Google Colab: https://colab.research.google.com/drive/13xef9gG8T_HXd41scOo9en0wEZ8Kp1Sz?usp=sharing.
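To make the label-aggregation step of such a crowd-powered pipeline concrete, here is a minimal majority-vote sketch in plain Python. This is illustrative only: Crowd-Kit ships production-grade aggregators, and the function name and data layout below are my own assumptions, not the Toloka API.

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate crowd labels per task by majority vote.

    annotations: list of (task_id, worker_id, label) tuples, as you
    might export from a crowdsourcing platform after a labeling run.
    Returns {task_id: winning_label}.
    """
    labels_by_task = {}
    for task, _worker, label in annotations:
        labels_by_task.setdefault(task, []).append(label)
    # Counter.most_common(1) picks the label with the most votes.
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_by_task.items()}

# Three workers label whether each image contains a face.
votes = [
    ("img1", "w1", "face"), ("img1", "w2", "face"), ("img1", "w3", "no_face"),
    ("img2", "w1", "no_face"), ("img2", "w3", "no_face"),
]
print(majority_vote(votes))  # → {'img1': 'face', 'img2': 'no_face'}
```

In practice you would also weight workers by their measured skill (Crowd-Kit's more advanced aggregators do exactly that), but the majority-vote baseline is the usual starting point.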

We taught human-centered design and had a panel with real-world crowdworkers who shared their experiences and needs. The participants were empowered to design better for crowdworkers and create more relevant technologies for them, as well as technologies that would better coordinate crowdworkers to produce higher-quality work. The crowdworkers who participated in CrowdCamp all came from Africa, and they shared how crowd work had provided them with new job opportunities that were typically not available in their countries. Crowd work helped to supplement their income (as a side job). They were motivated to participate in crowd work by the additional money, and also by knowing that they were contributing to something bigger than themselves (e.g., labeling images that will ultimately help power self-driving cars). Some of the challenges these crowdworkers experienced included unpaid training sessions; it was sometimes unclear whether the training sessions were worth it. They also discussed the importance of building worker communities.

CrowdCamp ended up being a success, with over 70 registered participants who created a number of different useful tools for crowdworkers. The event was hybrid, with people on the East Coast joining us at Northeastern University. We had delicious pizza and, given that we were in Boston, delicious Dunkin' Donuts :)

Chairing Professor Seth Cooper's Keynote.

We had the honor of chairing the keynote of Professor Seth Cooper, an Associate Professor at the Khoury College of Computer Sciences. He previously worked for Pixar Animation Studios and Electronic Arts, a major game maker. Seth is also the recipient of an NSF CAREER award. Professor Cooper's research has focused on using video games and crowdsourcing techniques to solve difficult scientific problems. He is the co-creator, lead designer, and developer of Foldit, a scientific discovery game that allows regular citizens to advance the field of biochemistry. Overall, his research combines scientific discovery games (particularly in computational structural biochemistry), serious games, and crowdsourcing games. A pioneer in the field of scientific discovery games, Dr. Cooper has shown that video game players are able to outperform purely computational methods for certain types of structural biochemistry problems, effectively codifying their strategies and integrating them in the lab to help design real synthetic molecules. He has also developed techniques to adapt the difficulty of tasks to individual game players and to generate game levels.

Seth’s talk discussed how he is using crowdsourcing to improve video games, and video games to improve crowdsourcing. What does this mean? In his research, Professor Cooper integrates crowd workers to help designers improve their video games. For example, he integrates crowds to help designers test just how hard or easy the game they are creating is, which lets designers identify how easily gamers can advance through the different stages of a game. Integrating crowdworkers allows designers to quickly iterate on and improve their video games. Dr. Cooper is also integrating gaming to improve crowdsourcing. In particular, he has studied how games can improve the quality of work produced by crowd workers.
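As a toy illustration of this kind of crowd-powered playtesting (a simplified sketch under my own assumptions, not Dr. Cooper's actual pipeline), one could aggregate crowdworkers' pass/fail attempts per level and flag levels whose completion rate falls outside the designer's target difficulty band:

```python
def level_difficulty(attempts, target=(0.4, 0.8)):
    """attempts: list of (level, completed_bool) from crowd playtests.

    Returns {level: (completion_rate, verdict)} where the verdict
    flags levels that fall outside the designer's target band of
    completion rates (too hard below it, too easy above it).
    """
    stats = {}  # level -> (wins, total attempts)
    for level, done in attempts:
        wins, total = stats.get(level, (0, 0))
        stats[level] = (wins + int(done), total + 1)
    lo, hi = target
    report = {}
    for level, (wins, total) in stats.items():
        rate = wins / total
        verdict = "too hard" if rate < lo else "too easy" if rate > hi else "ok"
        report[level] = (round(rate, 2), verdict)
    return report

# Simulated crowd playtest results for two levels.
data = [("1-1", True), ("1-1", True), ("1-1", True), ("1-1", False),
        ("1-2", False), ("1-2", False), ("1-2", True), ("1-2", False)]
print(level_difficulty(data))
# → {'1-1': (0.75, 'ok'), '1-2': (0.25, 'too hard')}
```

The appeal of this loop is that each design iteration gets fresh difficulty data within hours rather than waiting for a full release.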

During the Q&A with Professor Cooper, some interesting questions emerged:
What types of biases do crowdworkers bring to the table when co-designing video games? It was unclear whether crowdworkers actually play a video game the way typical gamers would. Hence, the audience wondered just how much designers actually use the results of the way crowdworkers engage with a video game. Professor Cooper mentioned that in his research, he found that crowdworkers play games similarly to typical gamers. One difference is that typical gamers (who play a game voluntarily instead of getting paid to play) usually focus more on the aspects of the game they like the most, while crowdworkers focus on exploring the whole game instead of particular parts (because of the role that payment plays). Perhaps these crowdworkers feel that by exploring the whole game, they are better showcasing to the requester (designer) that they are indeed playing the game and not slacking off. Some people do have a gaming style that focuses on a "catch-them-all" approach (an exploratory mode); the "catch-them-all" term references Pokemon, where people are interested in exploring the entire game and collecting all the different elements (e.g., Pokemons).

How might we integrate game design to help crowdworkers learn? Dr. Flores-Saviaga posed an interesting question about the role games could play in facilitating the career development of these workers. Professor Cooper expressed interest in this area, mentioning that you could imagine workers earning, instead of badges within the game, real certificates that translate into new job opportunities.

What gave him confidence that the gaming approach in crowdsourcing was worth pursuing? When Foldit came out, it was unclear whether gaming would actually be useful for mobilizing citizen crowds to complete complex scientific tasks. The audience wanted to know what led him to explore this path. Professor Cooper explained that part of it was taking a risk down a path he was passionate about: gaming. I think for PhD students and other new researchers starting out, it can be important to trust your intuition and conduct research that personally interests you. In research, you will take risks, which makes it all the more exciting :)


Dr. Jenn Wortman’s Keynote.

We greatly enjoyed the amazing keynote given by Dr. Jenn Wortman Vaughan (@jennwvaughan) at HCOMP 2022. She presented her research in Responsible AI, especially interpretability and fairness in AI systems.

A takeaway is that there are challenges in the design of interpretability tools for data scientists, such as InterpretML or the SHAP Python package: her team found that these tools can lead users to over-trust them and misunderstand how ML models work. For more info, see her CHI 2020 paper: "Interpreting Interpretability."
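For readers unfamiliar with what such tools compute, here is a toy permutation-importance sketch in plain Python. It is a simplified stand-in for the richer explanations InterpretML or SHAP produce (real analyses should use those libraries); the model and data here are invented for illustration.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature's column is shuffled --
    a rough proxy for how much the model relies on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature/label relationship
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model" that only ever looks at feature 0.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.9, 0.2], [0.8, 0.8], [0.2, 0.1]]
y = [0, 1, 1, 0]
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 has positive importance; feature 1 is 0 here
```

Even this tiny example shows why over-trust is a risk: the importance scores depend on the data distribution and the shuffling procedure, not only on the model itself.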

Dr. Jeffrey Bigham’s Keynote.

An incredible keynote given by Dr. Jeffrey Bigham at HCOMP 2022. He presented 17 years of work on image description! He showed the different connections (loops) involved in finding the right problem and right solution in image description, spanning computer vision, real-time recruitment, gig workers, conversations with the crowd, datasets, etc.

A takeaway is that there can be different interactions or loops in the process of applying machine learning and HCI, as seen in the image below, from problem selection all the way to the deployment of the system.

Doctoral Consortium.

The HCOMP doctoral consortium was led by Dr. Chien-Ju Ho and Dr. Alex Williams. The consortium is an opportunity for PhD students to share their research with crowdsourcing and human computation experts. Students have the opportunity to meet other PhD students, industry experts, and researchers to expand their network and receive mentoring from both industry and academia. Our lab participated in the proposal “Organizing Crowds to Detect Manipulative Content.” A lab member, Claudia Flores-Saviaga, presented the research she has done in this space for her PhD thesis.

Exciting news for the HCOMP community!

The big news I want to share is that I have the honor of being a co-organizer of next year's HCOMP! I will co-organize it with Alessandro Bozzon and Michael Bernstein. We are going to host the conference in Europe, and it will be co-located with the Collective Intelligence conference. Our theme is about reuniting and helping HCOMP grow in size by connecting with other fields, such as human-centered design, citizen science, data visualization, and serious games. I am excited to have the honor and opportunity to build the HCOMP conference.

Wednesday, June 08, 2022

Summer: Work in the Age of Intelligent Machines


Photos of Dr. Savage at the conference


I had the honor of starting the summer by attending and giving a keynote at the Work in the Age of Intelligent Machines (WAIM) Summer Conference. The conference, which is partially funded by the US National Science Foundation, aims to create a network of researchers who come together on the topic of work in the age of intelligent machines. In this post I will share a little bit of the discussions and activities we had at WAIM.


Opening Remarks.

The conference started with opening comments from the conference organizers, whose profiles I share below. Note that I share their profiles because I find it interesting how diverse the organizers were (which I think was key for organizing this type of conference):

Dr. Kevin Crowston. Distinguished Professor of Information Science at Syracuse University. He received his Ph.D. in Information Technologies from the Massachusetts Institute of Technology (MIT). His research examines new ways of organizing by the use of information technology. He approaches this issue in several ways: empirical studies of coordination-intensive processes in human organizations (especially virtual organization); theoretical characterizations of coordination problems and alternative methods for managing them; and design and empirical evaluation of systems to support people working together.
Dr. Jeffrey Nickerson. Professor and Associate Dean of research in the School of Business at Stevens Institute of Technology. His research focuses on different aspects of collective creativity, in particular the way crowds and communities design digital artifacts: 3D printing designs, systems designs, source code, and articles. He has a Ph.D. in Computer Science.
Dr. Ingrid Erickson. Associate Professor of Information Science at Syracuse University. Ph.D. from the Center for Work, Technology and Organization in the Department of Management Science and Engineering at Stanford University. She is a scholar of work and technology, currently fascinated by the way that mobile devices and ubiquitous digital infrastructures are influencing how we communicate with one another, navigate and inhabit spaces, and engage in new types of socio-technical practices.

The opening remarks began with an overview of what WAIM had become, how the network had grown, and the impact it has had over time. The main goal of WAIM was to create a network of researchers who together push forward investigations on work and machines. For instance, what role do machines have in labor? How do they change work dynamics? What kinds of science-fiction realities do we create by integrating intelligent machines into the workplace? What futures around work and machines do we want to avoid? What new power dynamics emerge from integrating machines into the workplace? Personally, WAIM has become a key space for building research collaborations. Through prior WAIM conferences, I was able to start working with Professor Jarrahi from UNC, Professor Matthew Lease from UT Austin, Professor Steve Sawyer from Syracuse University, and Professor Michael Dunn from the University at Albany. Overall, WAIM became a great place to connect with academics in the United States who also have an interest in the future of work and want to push forward a new future-of-work reality with machines. I have particularly liked that I have been able to connect with academics who are not just in computer science, but also in economics, information schools, and business schools. The diversity has certainly helped bring new perspectives to my research in this space. It was an inspiring and energizing way to start the conference, with excellent opening statements and a reminder of all that we had been able to start building together around work and intelligent machines.

Opening Keynote: Saiph Savage.

I then gave the opening keynote. You can access my slides here: XX. Overall, I discussed how to design meta-systems that can empower gig workers to design and create their own tools. We had very interesting discussions around how to motivate workers to collaborate with each other to create their own tools (collaborations are hard because gig platforms also promote competition between workers). I also discussed how to facilitate quality data sharing between workers, and how you might engage the other stakeholders to participate in the design process (I use value sensitive design on this front).

Panel: "Illuminating the Human-Technology Frontier" with Bledi Taska (Emsi Burning Glass), Nick Diakopoulos (Northwestern), Sarah Leibovitz (University of Virginia).


Afterwards, we had a panel on "Illuminating the Human-Technology Frontier," with participation from Bledi Taska (Emsi Burning Glass), Nick Diakopoulos (Northwestern), and Sarah Leibovitz (University of Virginia), moderated by Jeffrey Nickerson. Dr. Taska works at Emsi Burning Glass, which provides the nation's premier labor market data. He discussed how jobs are changing. From his company's data, he has observed that most changes occur in jobs that involve technology, and that what changes the most are the key skills these jobs need (i.e., the top 40 skills needed for the job). Professor Diakopoulos then discussed the role that AI plays in journalism and argued for the importance of better understanding the values that exist in each profession in order to define what the data looks like, what defaults are chosen, and what inputs the system has. It is important to understand the tensions that can arise when designing technology for journalists, especially around their values. For example, you have to ask: is there sufficient transparency in the AI so that the journalist feels comfortable with the results? Overall, Prof. Diakopoulos argued for the importance of thinking about intelligent systems and how to design them to match the values of a profession. He also argued for the importance of thinking about the edge that humans have over AI. Journalists have to negotiate with sources to get information out of them; that is something AI still cannot do. Additionally, journalists need creativity in how they communicate and present their stories to effectively engage their audiences. So, it is important to think about the edge that humans have, and how we might design technology that does not focus on replacing journalists but instead enhances their work. Within this setting, it is also important to think about how intelligent systems have made the job of journalists more difficult. For example, we now have a number of bots contaminating the information ecosystem, and journalists have to learn how to navigate and cut through the automated systems that might be spreading political lies.


Panel: Future of Work at the Human-Technology Frontier,
with NSF Program Director Andruid Kerne.

After the panel we had a fantastic lunch where conversations around the future of work with intelligent machines were common. We then heard insights from Dr. Andruid Kerne, who currently works at the US National Science Foundation (NSF) as a Program Director for NSF's Future of Work at the Human-Technology Frontier program. He discussed what makes a good research proposal for this program. He argued that good proposals include multiple views: the perspectives of workers, insights about the new technologies that will be developed, and how those technologies ultimately affect work. The program directors also considered that if your research could go into another NSF program, then you should just send it there. This highlights the importance of identifying what research is truly unique to the Future of Work at the Human-Technology Frontier. Dr. Kerne also discussed that for this program it is important for the intellectual merit to involve multiple fields. The program is committed to proposals that bring together different fields and that function across fields, not just within disciplinary silos. He argued that research across different fields is necessary to address issues around the future of work.

I also found it particularly interesting that Dr. Kerne argued that the program wants researchers to think about and consider the negative broader impacts that can emerge from the proposed research. For instance, how might the technologies that researchers want to study also facilitate the surveillance of workers, the deterioration of their working conditions, or violations of workers' digital privacy?

Finally, Dr. Kerne presented examples of successful projects that had been funded. The projects were interesting precisely because they combined multiple fields. For example, there were projects around empowering the labor and entrepreneurship of indigenous communities working in computational ceramics. There were also research projects on occupational exoskeletons that involved mechanical engineering, sociology, and economics. Other funded projects focused on the future of automation in the hospitality industry (and also included academics in healthcare, HCI, and design). Overall, I found Dr. Kerne's talk useful for better understanding what NSF's program on the future of work values, which should help ensure my success when I apply for future grants :)

Panel: WAIM Fellows Presentations.

A neat thing about this conference was that it also had fellows! The conference funded the year-long research of PhD students who conducted investigations on "Work in the Age of Intelligent Machines"! The fellows included PhD students from UT Austin, Carnegie Mellon University, Georgia Tech, among others. It was also inspiring to see that all the PhD students who were funded were women! Their research included:
  • Investigating how AI Can Be Integrated in Hiring Decisions (UT Austin). The research argued that people working with AI-based tools are not always experts; they can have very different backgrounds and experiences with AI, which can influence how they use the technology. This research focused on studying how people's background in AI impacts how they use AI, as well as how they use the information the AI provides to them within the hiring process. 

    The research first focused on understanding people's AI literacy: first measuring it, then studying how the measured literacy impacted how individuals made hiring decisions with AI. One challenge within this research is: how do you measure AI literacy? In this particular work, they ended up using an existing taxonomy that measures both how well people know certain AI concepts and how able they are to create new AI applications. The research also found that it was important to study what people's understanding of AI is, since people had very different perspectives about what AI was and what it could do. Such understanding matters because it affects how much individuals trust and use the results that an AI-based hiring system outputs to them. Some of the things the research measured around people's understanding of AI were: How much do people know that there are humans involved in the process of developing algorithms? Do they have a good idea of how algorithms work? What do people think of AI integrated into the decision process? Do people have positive or negative opinions of AI? Do they trust it, or just not like it? The research fellow discussed how she is currently planning to run a national sample to make better sense of people's opinions and knowledge around AI. She will then run an experiment in which people complete a job evaluation task with AI, to study how people's knowledge and perceptions of AI affect their behavior with AI-based job evaluation results. Based on her findings she also plans to create educational material on how AI can best be used for hiring given people's different backgrounds.
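    To make the measurement idea concrete, here is a minimal sketch of how a taxonomy-based AI-literacy score could be computed from Likert-style survey responses. All item names, subscales, and weights below are hypothetical illustrations of the concept-knowledge vs. creation-ability split described above, not the instrument actually used in the study.

```python
# Hypothetical AI-literacy scoring sketch. The taxonomy in the study
# distinguishes knowing AI concepts from being able to create AI
# applications; we mirror that with two illustrative subscales.

CONCEPT_ITEMS = ["knows_training_data", "knows_humans_build_algorithms",
                 "knows_model_limits"]
CREATION_ITEMS = ["can_build_simple_model", "can_evaluate_model"]

def literacy_score(responses):
    """Average 1-5 Likert responses into two subscores and an overall score."""
    concept = sum(responses[i] for i in CONCEPT_ITEMS) / len(CONCEPT_ITEMS)
    creation = sum(responses[i] for i in CREATION_ITEMS) / len(CREATION_ITEMS)
    return {"concept": concept,
            "creation": creation,
            "overall": (concept + creation) / 2}

# Example respondent: knows concepts fairly well, little creation experience.
example = {"knows_training_data": 4, "knows_humans_build_algorithms": 5,
           "knows_model_limits": 3, "can_build_simple_model": 2,
           "can_evaluate_model": 2}
print(literacy_score(example))  # concept 4.0, creation 2.0, overall 3.0
```

    A score like this could then be correlated with behavior in the job evaluation experiment, though real instruments would also need validation (e.g., reliability checks) that this sketch omits.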


  • Futures of AI-Based Care Work (Georgia Tech). The research studied how AI-based technologies could be integrated into the work of rural nurses to help reduce harms and support them in their jobs. The research is also looking at the new labor that organizations have to take on when new technology comes into the picture (new types of invisible labor!)

  • Peer Tools for Gig Workers (CMU). The research discussed how gig workers are on their own to build their brand and figure out the work they have to do. The research of Yasmine, the PhD student leading the work, focuses on designing peer support systems for gig workers. For instance, she designed "Hire Peer", an interface that offers formative feedback on creative entrepreneurship so gig workers can help each other grow and develop as entrepreneurs. Her research also focuses on helping workers better brand themselves and create an identity for themselves. Her research is heavily based on human-centered design and participatory design. Given that her research is also interested in creating peer support networks of gig workers (i.e., communities), she has also been studying how we can design solutions that are well integrated into communities. She asks: How do we sustain community-driven designs? How do we support community-based research, even when institutional barriers can hinder the analysis and reaching the communities? 

Overall, it was inspiring to see how the WAIM conference was able to fund and push forward the research of new top researchers in the future of work. I liked learning about the new research directions these scholars were exploring. I think that to know where the field will go, it is critical to listen to the new researchers. 


Panel: The Future Work of the Future of Work: MC Binz-Scharf (CUNY), Katie Pine (Arizona State University), Joel Chan (University of Maryland),
Moderator Ingrid Erickson

This panel involved some of the people I admire the most! So it was a thrill and a pleasure to hear each panelist and the moderator. Dr. MC Binz-Scharf discussed how the future of work is bright, but not for everyone; it can depend on the privilege of the individual. She discussed how there are matrices of oppression: we never have just one identity, and together our different identities can result in different types of discrimination and harm. It is thus important to think about the consequences that follow from the privilege of certain individuals. For example, in healthcare men often get the privilege of being studied fully, while women are sometimes considered to be simply "small men". That privilege brings several misconceptions around best practices for treating women in the healthcare system. There is currently no standard of care for women, so women are likely to be misdiagnosed, and Black women are 3 times more likely to die in childbirth. Inequality is codified in society and in the workplace, and a number of structures permit it. What types of jobs do women vs men get? The flexibility of gig work and certain types of jobs benefits some but not everyone. It is important to understand the socio-economic differences that can emerge from privilege. Technology also perpetuates inequality, as covered in related research such as the books Algorithms of Oppression and In Big Data We Trust: people trust big data algorithms, but the programmers and coders influence the design of those algorithms and the biases that exist, and people trust algorithms without questioning these biases. MC also discussed how "DEI" is a new buzzword that has substituted for "work inequality", although it has also been different from affirmative action. We need to understand the changes these new dynamics generate. 
MC argued for the following research directions: studying the impact of technology on behavioral change and diversity. She argues it is especially important to measure the effect of DEI initiatives. An interesting point MC made was that people who enjoy privilege should take part in DEI initiatives: if you enjoy some privilege, join the party and be part of DEI initiatives. She argues their participation is important because these individuals currently hold the power to push change forward. 

Similarly, Katie Pine (Arizona State University) and Joel Chan (University of Maryland) discussed who gets to design the future of work. People in a neighborhood want to participate in the design of the systems that control the neighborhood. But the problem is: who actually gets to participate in the design? Workers are often not able to design their own tools. It is a hard problem. How might we empower workers to design? How do we help workers to be proactive, not reactive, in the designs they propose? How do we get there? Katie and Joel argued for the importance of creating coalitions to address the problem, as well as conducting participatory design. However, integrating participatory design is NOT straightforward, especially because it requires a lot of work to really involve people in the design process; it can take a lot of work for the true needs to come out. It is also hard to know where the design of tools by participants should happen. You can bring people into the lab to design, but who gets to make it to the lab? (Many times it is elites who can travel to the laboratory, because they have more free time or means of getting there.) Alternatively, researchers could go to where the people are and do the participatory design in their space. The problem with that approach is that there are a number of barriers to entering their space, and once you arrive you face new issues you need to learn how to handle, such as power dynamics. The question then is: How do we create a shared space where people can all co-design? How do we design that space so people can really design together? There is also value in studying whether the designs created by gig workers were any good. How do you evaluate them? There is no clear sense of quality. Is it time for a meta-design for the future of work?

Keynote 2: Youngjin Yoo (Case Western)

The second keynote was by the famous Professor Youngjin Yoo. He is the Elizabeth M. and William C. Treuhaft Professor in Entrepreneurship and Professor of Information Systems at the department of Design & Innovation at the Weatherhead School of Management, Case Western Reserve University. An Association for Information Systems Fellow, he is also WBS Distinguished Research Environment Professor at Warwick Business School, UK, and a Visiting Professor at the London School of Economics, UK. He is the founding faculty director of xLab at Case Western Reserve University. He has worked as Innovation Architect at the University Hospitals in Cleveland, overseeing the digital transformation efforts at one of the largest teaching hospital systems in the country. Before returning to Case Western Reserve University, he was the Harry A. Cochran Professor of Management Information Systems and the Founding Director of the Center for Design+Innovation at the Fox School of Business, Temple University, where he was also the founder and Principal Investigator of Urban Apps & Maps Studios, an interdisciplinary initiative for digital urban entrepreneurship in Philadelphia. Previously, he was the Lewis-Progressive Chair of Management at Case Western Reserve University. He has taught digital innovation strategy at the Indian School of Business, Aalto University in Finland, and the Korea Advanced Institute of Science and Technology. He was a summer research fellow at NASA in the summer of 2001 and spent a year as a research associate in 2003–2004 at NASA Glenn Research Center. He was also a visiting professor at Chalmers University of Technology in Sweden, the Viktoria Institute in Sweden, Hitotsubashi University in Japan, Hong Kong City University, Yonsei University in Korea, and Tokyo University of Science in Japan. He holds a PhD in Information Systems from the University of Maryland. 
His research interests include digital innovation and entrepreneurship, organizational genetics, societal use of technology, and design. He has received over $4.5 million in research grants from the National Science Foundation, NASA, the James S. and John L. Knight Foundation, the Department of Commerce, the National Research Foundation of Korea, and Samsung Electronics. His work has been published in leading academic journals such as MIS Quarterly, Information Systems Research, Organization Science, the Communications of the ACM, and the Academy of Management Journal, among others. He is Senior Editor of MIS Quarterly, the Journal of the AIS, and the Journal of Information Technology, and is on the editorial boards of Organization Science, the Scandinavian Journal of Information Systems, and Information and Organization. He is a former senior editor of the Journal of Strategic Information Systems and an associate editor of Information Systems Research and Management Science. He has worked with leading companies including Samsung Electronics, Samsung Economic Research Institute, American Greetings, Progressive Insurance, Goodyear Tire, Sotera Health, Bendix, Moen, Intel, Ford Motor Company, Andersen Consulting, IDEO, Gehry and Partners, University Hospitals in Cleveland, American Management Systems, Lotus, NASA, Parker Hannifin, PolyOne, and the Department of Housing and Urban Development. 

These were some of the main activities of the first day. The conference offered amazing breakfast, lunch, and reception events. These social gatherings were really nice for connecting and networking with other researchers (we also had very nice night walks to the White House! See picture above.) The next day we had in-depth discussions on our papers and research proposals. I found this part of the conference particularly useful for my own work, as I was able to craft my research contribution in a much better way. I felt I was receiving coaching from top Olympic athletes who were guiding me to also be successful. Overall, it was a fantastic research event. Very useful for advancing my papers, proposals, and the tools I co-design and create with gig workers. It also opened new collaborations and was overall great for pushing forward the network of researchers conducting investigations on Work in the Age of Intelligent Machines :)