Commit 68a29f7

committed: updated publications and lab members
1 parent 4447357 commit 68a29f7

File tree

56 files changed: +1757 −112 lines changed


_people/2025-Aiden-Li.md

Lines changed: 10 additions & 0 deletions

_people/2025-Ziru-Wei.md

Lines changed: 10 additions & 0 deletions

_people/2025-Christina-Yang.md renamed to _people/alumni/2025-Christina-Yang.md

Lines changed: 1 addition & 0 deletions
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
---
layout: article

publication-date: 2025-03-24
title: "A Dynamic Bayesian Network Based Framework for Multimodal Context-Aware Interactions"
authors:
- Violet Yinuo Han
- Tianyi Wang
- Hyunsung Cho
- Kashyap Todi
- Ajoy Savio Fernandes
- Andre Levi
- Zheng Zhang
- Tovi Grossman
- Alexandra Ion
- Tanya Jonker
venue: In Proceedings of IUI ’25. Cagliari, Italy. March 24-27, 2025
type:
- Conference
- Full Paper
- Peer-reviewed
tags:
- Computational Interaction
- Dynamic Bayesian Networks
- Multimodal Interaction
- Context-Aware Interaction
- Bayesian Inference
- Large Language Models
- User Modeling

video: https://youtu.be/rb8KfYchya8
video-thumb: https://youtu.be/rb8KfYchya8

image: teaser.png
pdf: paper.pdf
doi: https://dl.acm.org/doi/full/10.1145/3708359.3712070


---

<p>
Multimodal context-aware interactions integrate multiple sensory inputs, such as gaze, gestures, speech, and environmental signals, to provide adaptive support across diverse user contexts. Building such systems is challenging due to the complexity of sensor fusion, real-time decision-making, and managing uncertainties from noisy inputs. To address these challenges, we propose a hybrid approach combining a dynamic Bayesian network (DBN) with a large language model (LLM). The DBN offers a probabilistic framework for modeling variables, relationships, and temporal dependencies, enabling robust, real-time inference of user intent, while the LLM incorporates world knowledge for contextual reasoning. We demonstrate our approach with a tri-level DBN implementation for tangible interactions, integrating gaze and hand actions to infer user intent in real time. A user evaluation with 10 participants in an everyday office scenario showed that our system can accurately and efficiently infer user intentions, achieving 0.83 per-frame accuracy, even in complex environments. These results validate the effectiveness of the DBN+LLM framework for multimodal context-aware interactions.
</p>
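
The commit adds only this landing page, not the paper's code. Purely as an illustration of the DBN half of the abstract's pipeline, one per-frame forward-filtering update over discretized gaze and hand observations could look like the Python sketch below; the intent states and every probability table are invented for illustration, not taken from the paper.

# Minimal sketch (assumed states and probabilities, not the authors' model)
# of one Bayesian filtering step: predict with the transition model, then
# reweight by the likelihood of the current multimodal observation.
import numpy as np

INTENTS = ["inspect", "grasp", "ignore"]  # hypothetical intent states

# P(intent_t | intent_{t-1}): temporal dependency between frames
TRANSITION = np.array([
    [0.8, 0.15, 0.05],
    [0.1, 0.85, 0.05],
    [0.1, 0.10, 0.80],
])

# P(observation | intent) for a discretized (gaze_on_object, hand_moving) pair
OBS_LIKELIHOOD = {
    (True,  True):  np.array([0.3, 0.6, 0.1]),
    (True,  False): np.array([0.6, 0.3, 0.1]),
    (False, True):  np.array([0.2, 0.3, 0.5]),
    (False, False): np.array([0.1, 0.1, 0.8]),
}

def filter_step(belief: np.ndarray, obs: tuple) -> np.ndarray:
    """One per-frame update: propagate the belief forward in time,
    weight by the observation likelihood, and renormalize."""
    predicted = TRANSITION.T @ belief
    posterior = OBS_LIKELIHOOD[obs] * predicted
    return posterior / posterior.sum()

belief = np.full(len(INTENTS), 1.0 / len(INTENTS))  # uniform prior
for obs in [(True, False), (True, True), (True, True)]:  # per-frame inputs
    belief = filter_step(belief, obs)
    print(dict(zip(INTENTS, belief.round(3))))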

_publications/2025-09-laymo.html

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
---
layout: article

publication-date: 2025-09-27
title: "Transforming Everyday Objects into Dynamic Interfaces using Smart Flat-Foldable Structures"
authors:
- Violet Yinuo Han
- Amber Yinglei Chen
- Mason Zadan
- Jesse T. Gonzalez
- Dinesh K. Patel
- Wendy Fangwen Yu
- Carmel Majidi
- Alexandra Ion
venue: In Proceedings of UIST ’25. Busan, Republic of Korea. Sept. 28 - Oct. 01, 2025
type:
- Conference
- Full Paper
- Peer-reviewed
tags:
- Physical Interfaces
- Robotic Objects
- Compliant Structures
- Kirigami

video: https://youtu.be/XPE4Pmyh2fM
video-thumb: https://youtu.be/l0rnG6rc5NA
video-preview: https://youtu.be/l0rnG6rc5NA

image: teaser.png
pdf: paper.pdf
doi: https://doi.org/10.1145/3746059.3747720


---

<p>
Dynamic physical interfaces are often dedicated devices designed to adapt their physical properties to user needs. In this paper, we present an actuation system that allows users to transform their existing objects into dynamic physical user interfaces. We design our actuation system to integrate as a self-contained locomotion layer into existing objects that are small-scale, i.e., hand-size rather than furniture-size. We envision that such objects can act as collaborators: as a studio assistant in a painter’s palette, as tutors in a student’s ruler, or as caretakers for plants evading direct sunlight.
</p>
<p>
The key idea is to decompose the actuation into (1) energy input and (2) steering to achieve a flat form factor. The energy input is provided by simple vibration. We implement steering through differential friction controlled by flat-foldable compliant structures that can be activated electrically. We study the mechanism and its performance, and show its application scenarios enabling dynamic interactions with objects.
</p>
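
Again, the diff contains only the page, not firmware. As a rough illustration of that energy-plus-steering decomposition, one control tick might look like the sketch below; the set_motor/set_pad callables, the braking-side convention, and the dead band are all assumptions, not the paper's mechanism code.

# Minimal control sketch (our assumptions, not the paper's implementation):
# a constant vibration motor supplies the energy input, and two electrically
# activated flat-foldable friction pads steer by braking one side.
import math

def heading_error(current: float, target: float) -> float:
    """Smallest signed angle (radians) from current heading to target."""
    return math.atan2(math.sin(target - current), math.cos(target - current))

def steer_step(current_heading: float, target_heading: float,
               set_motor, set_pad, dead_band: float = 0.1) -> None:
    """One tick: keep vibrating, and engage the pad on the side to pivot
    around (higher friction on that side turns the object toward it)."""
    set_motor(True)                      # continuous vibration = energy input
    err = heading_error(current_heading, target_heading)
    if abs(err) < dead_band:
        set_pad("left", False)
        set_pad("right", False)          # both pads released: go straight
    elif err > 0:
        set_pad("left", True)            # brake left side to turn left
        set_pad("right", False)
    else:
        set_pad("left", False)
        set_pad("right", True)           # brake right side to turn right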
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
---
layout: article

publication-date: 2025-09-27
title: "Towards Unobtrusive Physical AI: Augmenting Everyday Objects with Intelligence and Robotic Movement for Proactive Assistance"
authors:
- Violet Yinuo Han
- Jesse T. Gonzalez
- Christina Yang
- Zhiruo Wang
- Scott E. Hudson
- Alexandra Ion
venue: In Proceedings of UIST ’25. Busan, Republic of Korea. Sept. 28 - Oct. 01, 2025
type:
- Conference
- Full Paper
- Peer-reviewed
tags:
- physical AI
- tangible interfaces
- human-AI interaction
- agents
- robotic objects
- intention inference
- proactive assistance
- large language models


video: https://youtu.be/x3IRVDR3SjM
video-thumb: https://youtu.be/sWMiPVagiBs
video-preview: https://youtu.be/sWMiPVagiBs

image: teaser.jpg
pdf: paper.pdf
doi: https://doi.org/10.1145/3746059.3747726


---

<p>
Users constantly interact with physical, most often passive, objects. Consider if familiar objects instead proactively assisted users, e.g., a stapler moving across the table to help users organize documents, or a knife moving away to prevent injury as the user is inattentively about to lean against the countertop. In this paper, we build on the qualities of tangible interaction and focus on recognizing user needs in everyday tasks to enable ubiquitous yet unobtrusive tangible interaction. To achieve this, we introduce an architecture that leverages large language models (LLMs) to perceive users’ environment and activities, perform spatial-temporal reasoning, and generate object actions aligned with inferred user intentions and object properties. We demonstrate the system’s utility providing proactive assistance with multiple objects and in various daily scenarios. To evaluate our system components, we compare our system-generated output for user goal estimation and object action recommendation with human-annotated baselines, with results indicating good agreement.
</p>
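
As with the other pages, no system code ships in this commit. The sketch below is only an illustration of the perceive, reason, act loop the abstract describes: an LLM receives a textual scene description and activity history, infers the user's goal, and proposes per-object actions. The llm_complete callable, the prompt, and the JSON schema are invented stand-ins, not the paper's architecture.

# Illustrative sketch under assumed interfaces, not the authors' system.
import json
from typing import Callable

PROMPT = """You observe a desk scene and a user's recent activity.
Scene objects: {objects}
Activity history (oldest first): {history}
1. Infer the user's current goal in one sentence.
2. For each object, recommend "move_to <target>", "retreat", or "stay",
   respecting the object's physical properties.
Respond as JSON: {{"goal": str, "actions": {{object: action}}}}"""

def assist_step(objects: list[str], history: list[str],
                llm_complete: Callable[[str], str]) -> dict:
    """One proactive-assistance step: prompt the LLM with the observed
    context and parse its goal estimate and object-action plan."""
    reply = llm_complete(PROMPT.format(objects=objects, history=history))
    return json.loads(reply)

# Usage with a canned response standing in for a real model:
fake_llm = lambda prompt: json.dumps({
    "goal": "The user is collating printed pages.",
    "actions": {"stapler": "move_to user", "knife": "retreat"},
})
print(assist_step(["stapler", "knife"], ["user stacks pages"], fake_llm))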

_site/404.html

Lines changed: 2 additions & 2 deletions

_site/assets/people/2025-Aiden-Li.jpg

381 KB

_site/assets/people/2025-Ziru-Wei.jpg

1.2 MB
3.09 MB
