Scarecrows in Oz: Large Language Models in Human-Robot Interaction

In this workshop we want to explore the ways HRI researchers are using LLMs as “Scarecrows” – components that (possibly brainlessly) approximate pieces of desired functionality that are currently hard to achieve – much as HRI researchers use Wizard of Oz in empirical experiments. And just as HRI researchers have historically discussed the pros, cons, and need for reporting guidelines surrounding Wizard of Oz, we are equally interested in exploring the pros, cons, and need for reporting guidelines surrounding the use of LLMs in HRI.

Topics of interest

In recent years, Large Language Models (LLMs) have become the focus of intense interest in the AI community, and their use in interactive robots, in both academic research and commercial products, has attracted equal interest. However, there are currently no guidelines for, or categorizations of, their use across the various application spaces of Human-Robot Interaction (HRI).

This workshop invites academic researchers and industry professionals who are actively using or are interested in using LLMs for HRI, and who can contribute to the development of high-level, community-wide guidelines for how LLMs can fit correctly and defensibly into the future of HRI research and development.

Topics relevant to this workshop include HRI studies that directly or indirectly involve LLMs, as well as HRI studies that use the idea of “Scarecrows” (i.e., using LLMs to provide placeholder functionality, similar to Wizard-of-Oz studies) within a larger HRI system. We also encourage broader questions and contributions regarding how these models should be conceptualized within frameworks for effective, responsible HRI.

We invite contributions, commentary, and questions regarding (and combinations of) the following topics of interest:

  • the impact of "stubbing out" software modules as "Scarecrows" (traditionally done with humans in Wizard-of-Oz contexts) by using LLMs when building and testing larger HRI experiments;
  • opportunities and applications of LLMs in HRI;
  • risks and perils of LLMs in interactive robots;
  • reporting guidelines, ethical considerations, or real-world implications of LLMs in HRI;
  • safety of LLM-driven interaction, including fine-tuned LLMs, during interactions with the world or with users; and/or
  • position or framing papers on the role of LLMs in HRI.

Submission details

This workshop focuses broadly on topics related to the opportunities, risks, and guidelines for the use and reporting of Large Language Models in Human-Robot Interaction scenarios.

If your work focuses on the development, implementation, training, evaluation, or deployment of LLMs in HRI, please also consider our companion workshop: [Human – Large Language Model Interaction: The dawn of a new era or the end of it all?].


Authors are invited to submit short papers (2-4 pages) on the use of LLMs in HRI; submissions are due by February 05, 2024. Submissions should use the ACM template; Overleaf provides an appropriate template. All submissions should be anonymized for blind review. Submissions will be made through EasyChair at the following link: https://easychair.org/conferences/?conf=sohri24

Dates

  • January 15, 2024

    Submission site opens

  • February 05, 2024

    Submission deadline

    Anonymized short papers, 2-4 pages, via EasyChair

  • February 19, 2024

    Submission notification

  • March 04, 2024

    Camera-ready deadline

  • March 11, 2024

    Workshop date! (half day)

Program


All times MST.

  • 9:00am-9:10am: Introductions
  • 9:10am-9:35am: Lightning Talks
      • [PDF] Toward LLM-Powered Social Robots for Supporting Sensitive Disclosures of Stigmatized Health Conditions
        Alemitu Bezabih, Shadi Nourriz and Estelle Smith
      • [PDF] Enhancing Human-Robot Interaction with Multimodal Large Language Models
        Jorge Ortiz, Matthew Grimalovsky and Hedaya Walter
      • [PDF] Large Language Models as Proxies for Evaluating Collaborative Norms
        Michelle Zhao, Hao Zhu, Reid Simmons, Yonatan Bisk and Henny Admoni
      • [PDF] Exploring the Utilities of LLM’s in Robot-Supported Mindfulness Practices
        Shrirang Patil, Theing Mwe Oo and Heather Knight
      • [PDF] Hidden Scarecrows: Potential Consequences of Inaccurate Assumptions About LLMs in Robotic Moral Reasoning
        Terran Mott and Tom Williams
  • 9:35am-10:05am: Thunder Talk 1 - Casey Kennington
  • 10:05am-10:30am: Coffee Break and Networking
  • 10:30am-11:00am: Thunder Talk 2 - Laurel Riek
  • 11:00am-11:20am: Panel / Q&A
  • 11:20am-12:20pm: Brainstorm Breakout Session
  • 12:20pm-12:50pm: Brainstorm Reporting and Discussion
  • 12:50pm-1:00pm: Closing Remarks and Next Steps



Thunder Talks

Casey Kennington

Robots and LLMs: Advancements and Challenges

Laurel Riek

Stuffed With Straw: Information Shaped Sentences (LLMs) & the Future of HRI

Organizers

For questions and more information, please contact the organizers at hri2024scarecrows@gmail.com

Cynthia Matuszek

Associate Professor
University of Maryland, Baltimore County

Nick DePalma

Independent Scientist
Semio Community

Ross Mead

Founder and CEO
Semio

Tom Williams

Assistant Professor
Colorado School of Mines

Ruchen "Puck" Wen

Postdoc
University of Maryland, Baltimore County