This is an open-source experimental platform for studying how people learn compositional rules from examples. Participants see hands of playing cards labeled by category and must discover the hidden rule that generated them. Rules are specified in typed lambda calculus, enabling principled metrics of complexity and systematic study of transfer between structurally related rules.
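To make the idea concrete, here is a minimal sketch of what a compositional card rule might look like. This is an illustration only, not the platform's actual typed-lambda-calculus DSL: the function names (`all_same_suit`, `any_rank_at_least`, `conj`) and the card encoding are hypothetical.

```python
# Hypothetical sketch (NOT the platform's real rule language): a rule is a
# predicate over a hand of (rank, suit) cards, and rules compose, which is
# what makes transfer between structurally related rules measurable.
from typing import Callable, List, Tuple

Card = Tuple[int, str]          # (rank 2-14, suit name)
Hand = List[Card]
Rule = Callable[[Hand], bool]

def all_same_suit(hand: Hand) -> bool:
    """Example primitive rule: every card shares one suit."""
    return len({suit for _, suit in hand}) == 1

def any_rank_at_least(n: int) -> Rule:
    """Example rule schema: at least one card has rank >= n."""
    return lambda hand: any(rank >= n for rank, _ in hand)

def conj(a: Rule, b: Rule) -> Rule:
    """Conjunctive composition of two rules."""
    return lambda hand: a(hand) and b(hand)

# A composed rule: all one suit AND at least one high card.
rule = conj(all_same_suit, any_rank_at_least(10))
print(rule([(10, "hearts"), (12, "hearts")]))   # True
print(rule([(10, "hearts"), (12, "spades")]))   # False
```

In a typed-lambda-calculus representation, the size of the expression tree for a composed rule like this gives a natural complexity metric.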
The platform is part of a research project on self-explanation and program induction, investigating whether prompting learners to explain material to themselves shifts them from shallow feature-matching toward deeper compositional understanding. A companion computational modeling system (DreamCoder-inspired program synthesis) generates quantitative predictions that are tested against behavioral data collected here.
The links below are organized into three groups. Main Experiments contains the two experimental paradigms that participants complete — these are the core data-collection instruments of the project. Analysis Tools provides interactive visualizations used for experiment design (selecting rule pairs, assessing confusability). Other includes standalone components (tutorial, protocol recording) that can be explored independently. Together, these components form the full experimental pipeline: we design rule sets using the analysis tools, deploy experiments via the main paradigms, and collect behavioral data that is compared against predictions from the companion computational model.
Active development (February 2026): This platform is under active development and showcases the current stage of the project. The standard experiment and two-category paradigm are functional, but the interface, rule set, and experimental flow may still change as the project progresses. Other components (self-explanation conditions, verbal protocol recording, analysis tools) are at various stages of completion; some may behave unexpectedly or show placeholder content. Source code: github.com/konukcan/card-games (MIT License).
The experiment is fully configurable via URL parameters. The controls below let you adjust settings that are automatically appended to all experiment links on this page. For a quick preview, leave the defaults and click any experiment link. Key options:

- **Deck Size**: how many cards are in play (smaller decks make rank-based patterns more salient).
- **Gallery Size**: how many labeled examples participants see initially.
- **Self-Explain**: enables the self-explanation prompt (the core experimental manipulation).
- **Neg. Sampling**: whether non-matching hands are generated by random draw or by mutating a single card from a matching hand (mutation creates harder, more controlled contrasts).
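The mutation strategy for negative sampling can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the platform's actual generator: the function names and the retry loop are hypothetical.

```python
# Hypothetical sketch of mutation-based negative sampling: take a hand that
# satisfies the rule, replace one card at random, and keep the result only
# if it now violates the rule. The platform's real generator may differ.
import random
from typing import Callable, List, Tuple

Card = Tuple[int, str]
Hand = List[Card]

RANKS = list(range(2, 15))
SUITS = ["hearts", "diamonds", "clubs", "spades"]

def mutate_one_card(hand: Hand, rng: random.Random) -> Hand:
    """Replace a single randomly chosen card with a fresh random card."""
    i = rng.randrange(len(hand))
    new_hand = list(hand)
    new_hand[i] = (rng.choice(RANKS), rng.choice(SUITS))
    return new_hand

def sample_negative(hand: Hand, rule: Callable[[Hand], bool],
                    rng: random.Random, max_tries: int = 1000) -> Hand:
    """Mutate a matching hand until the rule no longer holds."""
    for _ in range(max_tries):
        candidate = mutate_one_card(hand, rng)
        if not rule(candidate):
            return candidate
    raise RuntimeError("could not find a violating mutation")

rng = random.Random(0)
all_hearts = lambda hand: all(suit == "hearts" for _, suit in hand)
positive = [(9, "hearts"), (11, "hearts"), (13, "hearts")]
negative = sample_negative(positive, all_hearts, rng)
```

Because the resulting negative differs from a positive in exactly one card, it forces participants to attend to the rule-relevant feature rather than superficial differences, which is why mutation yields harder, more controlled contrasts than random draws.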
| Parameter | Values | Effect |
|---|---|---|
| `save` | `local` | Force CSV download (no DataPipe) |
| `clear` | `1` | Clear session state (fresh start) |
| `deckSize` | `52`, `32`, `28` | Deck size (`32` = 7–A only) |
| `gallerySize` | `3`, `4`, `5` | Hands per category in gallery |
| `negativeSampling` | `mutation`, `random` | How losing hands are generated |
| `curriculum` | `1-1` to `12-4` | Latin square condition |
| `rule` | rule ID | Single rule mode |
| `testMode` | `ternary`, `binary` | A/B/Neither or A/B only (two-cat) |
| `stimuli` | `fixed`, `random` | Fixed JSON or random sampling (two-cat) |
| `pairId` | e.g. `r60x_r23x` | Force specific pair (two-cat single-pair) |
| `condition` | `c1-1` to `c2-8` | Latin square condition (two-cat curriculum) |
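Parameters from the table above are combined into an ordinary query string. A minimal sketch of composing such a link, with a placeholder base URL (the real experiment address is not shown here):

```python
# Build an experiment URL from a dict of the parameters listed above.
# The base URL is a placeholder, not the platform's real address.
from urllib.parse import urlencode

params = {
    "deckSize": 32,
    "gallerySize": 4,
    "negativeSampling": "mutation",
    "save": "local",
}
base = "https://example.org/experiment.html"   # placeholder base URL
url = f"{base}?{urlencode(params)}"
print(url)
```

Any subset of parameters can be supplied; unspecified ones fall back to the defaults described above.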