The topic for CS2520R Fall 2025 is Neurosymbolic Programming. Large language models have achieved remarkable performance across a variety of tasks, but when applied to the synthesis and evaluation of programs and proofs, these models often generalize poorly beyond their training distribution, violate syntactic or semantic constraints, and produce outputs that cannot be verified. In contrast, symbolic AI reasons readily over symbolic abstractions, but it suffers from brittleness and struggles to learn at scale. To bridge this gap, this course examines how hybrid systems combining neural and symbolic components can learn to synthesize and evaluate formal logic proofs and programs, solve constraint satisfaction problems, and enforce syntactic and semantic guarantees ranging from strict invariants to probabilistic constraints.
We will study neurosymbolic systems with applications that include theorem proving, program synthesis, structured language generation, visuolinguistic reasoning, planning, and constraint satisfaction. These case studies will ground foundational questions: What inductive biases support symbolic generalization? How can discrete symbolic structures interface with continuous neural representations without sacrificing formal correctness? How can neurosymbolic models advance scientific and mathematical discovery?
As a student, you will present a neurosymbolic paper of your choice, implement a common warm‑up assignment, and complete a semester‑long final project that takes an original idea from design through implementation. By the end of the term, you'll have both the theoretical foundations and hands‑on experience needed to design and implement AI systems that reason formally, with both rigor and flexibility.