Week 5. Feb. 7: Transformers for Multi-Agent Simulation - Possibilities #12

@avioberoi

Description

Pose a question about one of the following articles:

“In Silico Sociology: Forecasting COVID-19 Polarization with Large Language Models” by Austin C. Kozlowski, Hyunku Kwon, and James A. Evans. This paper demonstrates how LLMs can serve as a tool for sociological inquiry by enabling accurate simulation of respondents from specific social and cultural contexts. Applying LLMs in this capacity, the authors reconstruct the public opinion landscape of 2019 to examine the extent to which the future polarization over COVID-19 was prefigured in existing political discourse. Using an LLM trained only on texts published through 2019, they simulate the responses of American liberals and conservatives to a battery of pandemic-related questions.

“Generative agents: Interactive simulacra of human behavior.” Park, Joon Sung, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. UIST. This paper introduces “generative agents,” advanced computational software agents capable of simulating realistic human behaviors such as daily routines, artistic creation, and social interactions, leveraging large language models for enhanced believability. These agents, which can remember, plan, and reflect using natural language, are showcased in an interactive environment inspired by “The Sims,” demonstrating their ability to autonomously perform complex social behaviors, like organizing and attending a Valentine's Day party, thereby offering new possibilities for interactive applications and realistic behavior simulation.
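The memory architecture the summary mentions can be made concrete with a toy sketch. The paper scores each stored memory by recency, importance, and relevance when deciding what an agent should recall; the sketch below is illustrative only — the decay constant, equal weights, and keyword-overlap relevance (standing in for the paper's embedding similarity) are assumptions, not the authors' exact implementation.

```python
def retrieval_score(memory, now_hours, query_keywords,
                    w_recency=1.0, w_importance=1.0, w_relevance=1.0):
    """Toy version of generative-agent memory retrieval: rank memories
    by a weighted sum of recency, importance, and relevance.
    (Weights, decay rate, and keyword relevance are illustrative.)"""
    # Recency: exponential decay per hour since the memory was stored.
    recency = 0.995 ** (now_hours - memory["time_hours"])
    # Relevance: keyword overlap stands in for embedding similarity.
    words = set(memory["text"].lower().split())
    relevance = len(query_keywords & words) / max(len(query_keywords), 1)
    return (w_recency * recency
            + w_importance * memory["importance"]
            + w_relevance * relevance)

memories = [
    {"time_hours": 0, "importance": 0.9, "text": "planned a valentines party"},
    {"time_hours": 5, "importance": 0.2, "text": "ate breakfast alone"},
]
best = max(memories,
           key=lambda m: retrieval_score(m, now_hours=6, query_keywords={"party"}))
print(best["text"])
```

Retrieving the top-scoring memory this way is what lets an agent "remember" the party invitation hours later and act on it.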

“Out of one, many: Using language models to simulate human samples.” Argyle, Lisa P., Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, and David Wingate. 2023. Political Analysis. This paper explores the potential of language models to serve as accurate proxies for diverse human subpopulations in social science research. By creating “silicon samples” conditioned on sociodemographic backstories, the authors demonstrate GPT-3's remarkable “algorithmic fidelity,” showing its ability to emulate complex human response patterns across various groups and thereby offering a novel tool for gaining deeper insight into human attitudes and societal dynamics.
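The "silicon sample" conditioning step can be sketched as building a first-person backstory from sociodemographic fields and prepending it to a survey item before querying the model. The field names and template below are hypothetical illustrations of the idea, not the paper's exact prompts:

```python
def backstory_prompt(profile: dict, question: str) -> str:
    """Build an illustrative first-person backstory prompt for a
    'silicon' respondent. (Template and fields are assumptions, not
    the paper's verbatim conditioning text.)"""
    backstory = (
        f"I am a {profile['age']}-year-old {profile['gender']} "
        f"from {profile['state']}. Ideologically, I describe myself "
        f"as {profile['ideology']}."
    )
    # The completion the LLM produces after "Me:" is treated as the
    # simulated survey response.
    return f"{backstory}\n\nInterviewer: {question}\nMe:"

prompt = backstory_prompt(
    {"age": 45, "gender": "man", "state": "Ohio",
     "ideology": "conservative"},
    "Who did you vote for in the 2016 presidential election?",
)
print(prompt)
```

Varying the profile fields while holding the question fixed is what lets researchers compare simulated response distributions across subpopulations against real survey data.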

“Simulating social media using large language models to evaluate alternative news feed algorithms.” Törnberg, Petter, Diliara Valeeva, Justus Uitermark, and Christopher Bail. 2023. arXiv. This paper investigates the use of Large Language Models (LLMs) and Agent-Based Modeling as tools to simulate social media environments, aiming to understand how different news feed algorithms influence the quality of online conversations.

“Jury learning: Integrating dissenting voices into machine learning models” Gordon, Mitchell L., Michelle S. Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S. Bernstein. 2022. CHI. This paper introduces “jury learning,” a novel machine learning approach that addresses the challenge of reconciling diverse societal perspectives in tasks such as toxicity detection. Unlike traditional supervised ML that relies on majority voting, jury learning incorporates the metaphor of a jury to define which groups and in what proportions influence the classifier’s predictions.
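The jury metaphor in the summary can be illustrated with a minimal aggregation sketch: instead of a single majority-vote label, per-group annotator models each predict a judgment, and the practitioner specifies how many jury seats each group holds. The predictors and seat counts below are toy assumptions, not the paper's trained models:

```python
def jury_score(group_predictors, jury_spec, text):
    """Illustrative jury-learning aggregation: average predicted
    judgments across jury seats allocated per annotator group.
    (The real paper trains per-annotator predictors; these toy
    keyword predictors are stand-ins.)"""
    total, seats = 0.0, 0
    for group, n_seats in jury_spec.items():
        total += group_predictors[group](text) * n_seats
        seats += n_seats
    return total / seats

# Toy group predictors returning a toxicity judgment in [0, 1].
predictors = {
    "group_a": lambda t: 0.8 if "idiot" in t else 0.1,
    "group_b": lambda t: 0.3 if "idiot" in t else 0.0,
}
# A 12-seat jury split evenly between the two groups.
score = jury_score(predictors, {"group_a": 6, "group_b": 6},
                   "you are an idiot")
print(score)
```

Changing the seat allocation changes the verdict — which is the point: the composition of the jury, not an implicit majority, determines whose judgments the classifier reflects.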

“The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery” by C. Lu, C. Lu, R. Tjarko Lange, J. Foerster, J. Clune, and D. Ha. 2024. This paper explores how to generate a scientific environment by role-playing all of the institutional perspectives associated with collective scientific advance.
