A citation-based method for searching scientific literature

Bradley B Doll, Dylan A Simon, Nathaniel D Daw. Curr Opin Neurobiol 2012
Times Cited: 165
List of co-cited articles
1157 articles co-cited more than once

Model-based influences on humans' choices and striatal prediction errors.
Nathaniel D Daw, Samuel J Gershman, Ben Seymour, Peter Dayan, Raymond J Dolan. Neuron 2011
Times cited: 741 | Co-cited: 53

A neural substrate of prediction and reward.
W Schultz, P Dayan, P R Montague. Science 1997
Co-cited: 33

Goals and habits in the brain.
Ray J Dolan, Peter Dayan. Neuron 2013
Times cited: 432 | Co-cited: 24

Neural computations underlying arbitration between model-based and model-free learning.
Sang Wan Lee, Shinsuke Shimojo, John P O'Doherty. Neuron 2014
Times cited: 256 | Co-cited: 20

Orbitofrontal cortex as a cognitive map of task space.
Robert C Wilson, Yuji K Takahashi, G Schoenbaum, Yael Niv. Neuron 2014
Times cited: 399 | Co-cited: 19

Cognitive maps in rats and men.
E C Tolman. Psychol Rev 1948
Co-cited: 16

Model-based choices involve prospective neural activity.
Bradley B Doll, Katherine D Duncan, Dylan A Simon, Daphna Shohamy, Nathaniel D Daw. Nat Neurosci 2015
Times cited: 129 | Co-cited: 16

Dopamine enhances model-based over model-free choice behavior.
Klaus Wunderlich, Peter Smittenaar, Raymond J Dolan. Neuron 2012
Times cited: 145 | Co-cited: 15

Dissociable roles of ventral and dorsal striatum in instrumental conditioning.
John O'Doherty, Peter Dayan, Johannes Schultz, Ralf Deichmann, Karl Friston, Raymond J Dolan. Science 2004
Co-cited: 15

The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.
A Ross Otto, Samuel J Gershman, Arthur B Markman, Nathaniel D Daw. Psychol Sci 2013
Times cited: 163 | Co-cited: 14

Ventral striatal dopamine reflects behavioral and neural signatures of model-based control during sequential decision making.
Lorenz Deserno, Quentin J M Huys, Rebecca Boehme, Ralph Buchert, Hans-Jochen Heinze, Anthony A Grace, Raymond J Dolan, Andreas Heinz, Florian Schlagenhauf. Proc Natl Acad Sci U S A 2015
Times cited: 121 | Co-cited: 14

Human Orbitofrontal Cortex Represents a Cognitive Map of State Space.
Nicolas W Schuck, Ming Bo Cai, Robert C Wilson, Yael Niv. Neuron 2016
Times cited: 192 | Co-cited: 14

Neurons in the orbitofrontal cortex encode economic value.
Camillo Padoa-Schioppa, John A Assad. Nature 2006
Times cited: 872 | Co-cited: 13

Working-memory capacity protects model-based learning from stress.
A Ross Otto, Candace M Raio, Alice Chiang, Elizabeth A Phelps, Nathaniel D Daw. Proc Natl Acad Sci U S A 2013
Times cited: 227 | Co-cited: 13

By carrot or by stick: cognitive reinforcement learning in parkinsonism.
Michael J Frank, Lauren C Seeberger, Randall C O'Reilly. Science 2004
Co-cited: 12

Habits, action sequences and reinforcement learning.
Amir Dezfouli, Bernard W Balleine. Eur J Neurosci 2012
Times cited: 136 | Co-cited: 12

Cognitive control predicts use of model-based reinforcement learning.
A Ross Otto, Anya Skatova, Seth Madlon-Kay, Nathaniel D Daw. J Cogn Neurosci 2015
Times cited: 81 | Co-cited: 13

The role of the basal ganglia in habit formation.
Henry H Yin, Barbara J Knowlton. Nat Rev Neurosci 2006
Co-cited: 11

The algorithmic anatomy of model-based evaluation.
Nathaniel D Daw, Peter Dayan. Philos Trans R Soc Lond B Biol Sci 2014
Times cited: 80 | Co-cited: 13

Neural correlates of forward planning in a spatial decision task in humans.
Dylan Alexander Simon, Nathaniel D Daw. J Neurosci 2011
Times cited: 108 | Co-cited: 11

The role of the dorsomedial striatum in instrumental conditioning.
Henry H Yin, Sean B Ostlund, Barbara J Knowlton, Bernard W Balleine. Eur J Neurosci 2005
Times cited: 652 | Co-cited: 10

Speed/accuracy trade-off between the habitual and the goal-directed processes.
Mehdi Keramati, Amir Dezfouli, Payam Piray. PLoS Comput Biol 2011
Times cited: 167 | Co-cited: 10

A specific role for posterior dorsolateral striatum in human habit learning.
Elizabeth Tricomi, Bernard W Balleine, John P O'Doherty. Eur J Neurosci 2009
Times cited: 392 | Co-cited: 10

Model-based learning protects against forming habits.
Claire M Gillan, A Ross Otto, Elizabeth A Phelps, Nathaniel D Daw. Cogn Affect Behav Neurosci 2015
Times cited: 90 | Co-cited: 11

Dorsal hippocampus contributes to model-based planning.
Kevin J Miller, Matthew M Botvinick, Carlos D Brody. Nat Neurosci 2017
Times cited: 74 | Co-cited: 13

The successor representation in human reinforcement learning.
I Momennejad, E M Russek, J H Cheong, M M Botvinick, N D Daw, S J Gershman. Nat Hum Behav 2017
Times cited: 92 | Co-cited: 10

Retrospective revaluation in sequential decision making: a tale of two systems.
Samuel J Gershman, Arthur B Markman, A Ross Otto. J Exp Psychol Gen 2014
Times cited: 96 | Co-cited: 9

Of goals and habits: age-related and individual differences in goal-directed decision-making.
Ben Eppinger, Maik Walter, Hauke R Heekeren, Shu-Chen Li. Front Neurosci 2013
Times cited: 64 | Co-cited: 14

Temporal difference models and reward-related learning in the human brain.
John P O'Doherty, Peter Dayan, Karl Friston, Hugo Critchley, Raymond J Dolan. Neuron 2003
Times cited: 941 | Co-cited: 9

The reward circuit: linking primate anatomy and human imaging.
Suzanne N Haber, Brian Knutson. Neuropsychopharmacology 2010
Co-cited: 9

Reinforcement learning: the good, the bad and the ugly.
Peter Dayan, Yael Niv. Curr Opin Neurobiol 2008
Times cited: 228 | Co-cited: 9

Decoding subjective decisions from orbitofrontal cortex.
Erin L Rich, Jonathan D Wallis. Nat Neurosci 2016
Times cited: 160 | Co-cited: 9

Dopamine reward prediction errors reflect hidden-state inference across time.
Clara Kwon Starkweather, Benedicte M Babayan, Naoshige Uchida, Samuel J Gershman. Nat Neurosci 2017
Times cited: 68 | Co-cited: 13

Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.
Wouter Kool, Samuel J Gershman, Fiery A Cushman. Psychol Sci 2017
Times cited: 72 | Co-cited: 12

Predictive representations can link model-based reinforcement learning to model-free mechanisms.
Evan M Russek, Ida Momennejad, Matthew M Botvinick, Samuel J Gershman, Nathaniel D Daw. PLoS Comput Biol 2017
Times cited: 85 | Co-cited: 10

Vicarious trial and error.
A David Redish. Nat Rev Neurosci 2016
Times cited: 161 | Co-cited: 9

Model-based predictions for dopamine.
Angela J Langdon, Melissa J Sharpe, Geoffrey Schoenbaum, Yael Niv. Curr Opin Neurobiol 2018
Times cited: 49 | Co-cited: 18

Temporal prediction errors in a passive learning task activate human striatum.
Samuel M McClure, Gregory S Berns, P Read Montague. Neuron 2003
Times cited: 605 | Co-cited: 8


Co-cited is the co-citation frequency: the number of articles that cite the listed article together with the query article. Similarity expresses the co-citation frequency as a percentage of the times cited of the query article or of the listed article, whichever is lower. When an article has been cited more than 100 times, these numbers are calculated over its most recent 100 citations.
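The similarity rule described above can be sketched as a short Python function. This is a minimal illustration of the stated definition, not the search tool's actual implementation; the function name, arguments, and the exact way the 100-citation cap enters the denominator are assumptions.

```python
def similarity(co_cited, query_cited, result_cited, cap=100):
    """Co-citation frequency as a percentage of the smaller of the
    two citation counts, each capped at the most recent 100 citations
    (per the definition given in the footnote)."""
    denom = min(min(query_cited, cap), min(result_cited, cap))
    return 100.0 * co_cited / denom

# Example with the first result: the query article is cited 165 times
# (capped to 100) and Daw et al. 2011 is cited 741 times (capped to 100),
# with a co-citation frequency of 53.
similarity(53, 165, 741)  # -> 53.0
```

For an article cited fewer than 100 times, the cap has no effect and the denominator is simply the smaller of the two raw citation counts.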