src/components/DemoPage.jsx
6 additions & 6 deletions
@@ -122,7 +122,7 @@ if __name__ == "__main__":
margin: '0',
fontSize: '15px'
}}>
- Human-computer interaction has long imagined technology that understands us-from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific apps, and incapable of the flexible reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted propositions that capture the user's knowledge and preferences.
+ Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific apps, and incapable of the flexible reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted propositions that capture the user's knowledge and preferences.
- Above is a collection of propositions that a GUM might make about a user based on their computer use. Drag the slider to see propositions update based on various activies during the day.
+ Above is a collection of propositions that a GUM might make about a user based on their computer use. Drag the slider to see propositions update based on various activities during the day.
</p>
</div>
</div>
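The changed paragraph above introduces the GUM's core output, a confidence-weighted proposition built from unstructured observations. As a rough illustration only, such a record could be represented as below; the field names and the 1-10 confidence scale are assumptions for this sketch (the scale echoes the evaluation section later in the page), not the schema of the released GUM implementation.

from dataclasses import dataclass

@dataclass
class Proposition:
    """Hypothetical shape of a confidence-weighted proposition (illustrative only)."""
    text: str            # e.g., "The user prefers reading papers in dark mode."
    confidence: int      # 1 (speculative) to 10 (near-certain)
    evidence: list[str]  # unstructured observations that support it, e.g., screenshot transcriptions

def filter_confident(props: list[Proposition], threshold: int = 8) -> list[Proposition]:
    """Keep only propositions confident enough for downstream use."""
    return [p for p in props if p.confidence >= threshold]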
@@ -213,7 +213,7 @@ if __name__ == "__main__":
margin: '0',
fontSize: '15px'
}}>
- Any application that might rely on unstructured user context could benefit from a GUM. We create a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on the their GUM. GUMBO discovers helpful suggestions, determines if a suggestion is worth showing to a user and executing, and then executes the (sandboxed) suggestion to the best of its ability---sharing preliminary results with the user.
+ Any application that might rely on unstructured user context could benefit from a GUM. We create a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on their GUM. GUMBO discovers helpful suggestions, determines if a suggestion is worth showing to a user and executing, and then executes the (sandboxed) suggestion to the best of its ability---sharing preliminary results with the user.
</p>
</div>
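The paragraph above describes GUMBO as a pipeline: discover suggestions from the GUM, decide which are worth showing and executing, run them sandboxed, and share preliminary results. The sketch below only mirrors that control flow; every function name and return value is a hypothetical placeholder, not the released implementation.

def discover_suggestions(gum):
    # In the paper, suggestions are grounded in the GUM's propositions; here we return a canned example.
    return [{"task": "draft a reply to the pending scheduling email"}]

def worth_showing_and_executing(gum, suggestion):
    # The real system weighs whether a suggestion is useful enough to surface; stubbed out here.
    return True

def execute_in_sandbox(suggestion):
    # Execution stays sandboxed; only a preliminary result is produced for the user to review.
    return f"[sandboxed preliminary result for: {suggestion['task']}]"

def gumbo_step(gum):
    for suggestion in discover_suggestions(gum):
        if not worth_showing_and_executing(gum, suggestion):
            continue
        result = execute_in_sandbox(suggestion)
        print(suggestion["task"], "->", result)  # shared with the user, never applied directly

gumbo_step(gum=None)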
@@ -339,7 +339,7 @@ if __name__ == "__main__":
margin: '0',
fontSize: '15px'
}}>
- For GUMs, privacy guarantees are critical from the start. Our general engineering principle here is to rely primarily on open-source models for our study. While closed-source models are more performant, we expect open-source models to be owned by individual users and eventually distilled to be run on local devices. Our study was deployed and run with open-source models. As gaps between closed and opensourced models close and as models become cheaper for inference, model's will become more performant and feasible on commodity hardware. Our implementation is open-source (available on <a href="https://github.com/generalusermodels/gum" target="_blank" rel="noopener noreferrer" style={{color: '#ff9d9d'}}>GitHub</a>) and uses the OpenAI Completions API. Opensource inference platforms like vLLM support the Completions API, and work with systems like GUM.
+ For GUMs, privacy guarantees are critical from the start. Our general engineering principle here is to rely primarily on open-source models for our study. While closed-source models are more performant, we expect open-source models to be owned by individual users and eventually distilled to be run on local devices. Our study was deployed and run with open-source models. As gaps between closed and open-sourced models close and as models become cheaper for inference, models will become more performant and feasible on commodity hardware. Our implementation is open-source (available on <a href="https://github.com/generalusermodels/gum" target="_blank" rel="noopener noreferrer" style={{color: '#ff9d9d'}}>GitHub</a>) and uses the OpenAI Completions API. Open-source inference platforms like vLLM support the Completions API, and work with systems like GUM.
</p>
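Because the paragraph above names the integration point (the OpenAI Completions API, which open-source servers such as vLLM also expose), a locally hosted open-weight model can be swapped in by pointing the client at the local endpoint. The sketch below shows that wiring under stated assumptions: the port, the "EMPTY" API key, and the model name are placeholders for whatever you serve, not values taken from the GUM repository.

from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server is already running locally,
# e.g. started with: vllm serve Qwen/Qwen2.5-32B-Instruct
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",  # any locally served open-weight model
    messages=[{"role": "user", "content": "Summarize this screenshot transcription: ..."}],
)
print(resp.choices[0].message.content)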
@@ -358,7 +358,7 @@ if __name__ == "__main__":
margin: '0',
fontSize: '15px'
}}>
- In our technical evaluations, we first focus on validating GUM accuracy. We train GUM on recent email interaction, feeding each email---metadata, attachments, links, and replies---sequentially into the GUM. N=18 participants judged propositions generated by GUMs as overall accurate and well-calibrated: unconfident when incorrect, and confident when correct. Highly confident propositions (confidence = 10) were rated 100% accurate, while all propositions on average---including ones with low confidence---were fairly accurate (76.15%). From ablation studies, we show that all GUM components are critical for accuracy.
+ In our technical evaluations, we first focus on validating GUM accuracy. We train GUM on recent email interactions, feeding each email---metadata, attachments, links, and replies---sequentially into the GUM. N=18 participants judged propositions generated by GUMs as overall accurate and well-calibrated: unconfident when incorrect, and confident when correct. Highly confident propositions (confidence = 10) were rated 100% accurate, while all propositions on average---including ones with low confidence---were fairly accurate (76.15%). From ablation studies, we show that all GUM components are critical for accuracy.

<div className="graph-figure-container">
<img
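The calibration claim in the changed paragraph (near-perfect accuracy at confidence 10, 76.15% on average) amounts to grouping rated propositions by confidence and comparing judged accuracy per group. A minimal sketch of that computation follows; the (confidence, is_correct) input format is an assumption for illustration, not the study's actual data pipeline.

from collections import defaultdict

def accuracy_by_confidence(ratings):
    """ratings: iterable of (confidence: int in 1..10, is_correct: bool)."""
    buckets = defaultdict(list)
    for confidence, is_correct in ratings:
        buckets[confidence].append(is_correct)
    # A well-calibrated model is more accurate at higher confidence levels.
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}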
@@ -398,7 +398,7 @@ if __name__ == "__main__":
fontSize: '15px',
}}
>
- We built a fully-functional macOS GUMBO client. Below are a couple of
+ We built a fully functional macOS GUMBO client. Below are a couple of
screenshots that showcase the interface and key features.