A citation-based method for searching scientific literature

Query article:
A Neural Substrate of Prediction and Reward.
W Schultz, P Dayan, P R Montague. Science 1997
Times Cited: 4564

List of co-cited articles
777 articles co-cited more than once

For each article below, Times Cited and Times Co-cited are listed.

Mesolimbic dopamine signals the value of work.
Arif A Hamid, Jeffrey R Pettibone, Omar S Mabrouk, Vaughn L Hetrick, Robert Schmidt, Caitlin M Vander Weele, Robert T Kennedy, Brandon J Aragona, Joshua D Berke. Nat Neurosci 2016
Times Cited: 333, Times Co-cited: 13

Neuron-type-specific signals for reward and punishment in the ventral tegmental area.
Jeremiah Y Cohen, Sebastian Haesler, Linh Vong, Bradford B Lowell, Naoshige Uchida. Nature 2012
Times Cited: 703, Times Co-cited: 12

Learning the value of information in an uncertain world.
Timothy E J Behrens, Mark W Woolrich, Mark E Walton, Matthew F S Rushworth. Nat Neurosci 2007
Times Cited: 969, Times Co-cited: 10

Goals and habits in the brain.
Ray J Dolan, Peter Dayan. Neuron 2013
Times Cited: 427, Times Co-cited: 10

Prefrontal cortex as a meta-reinforcement learning system.
Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, Matthew Botvinick. Nat Neurosci 2018
Times Cited: 121, Times Co-cited: 9

The reward circuit: linking primate anatomy and human imaging.
Suzanne N Haber, Brian Knutson. Neuropsychopharmacology 2010
Times Co-cited: 8

Prolonged dopamine signalling in striatum signals proximity and value of distant rewards.
Mark W Howe, Patrick L Tierney, Stefan G Sandberg, Paul E M Phillips, Ann M Graybiel. Nature 2013
Times Cited: 262, Times Co-cited: 8

Model-based influences on humans' choices and striatal prediction errors.
Nathaniel D Daw, Samuel J Gershman, Ben Seymour, Peter Dayan, Raymond J Dolan. Neuron 2011
Times Cited: 730, Times Co-cited: 8

Representation of action-specific reward values in the striatum.
Kazuyuki Samejima, Yasumasa Ueda, Kenji Doya, Minoru Kimura. Science 2005
Times Cited: 562, Times Co-cited: 8

A causal link between prediction errors, dopamine neurons and learning.
Elizabeth E Steinberg, Ronald Keiflin, Josiah R Boivin, Ilana B Witten, Karl Deisseroth, Patricia H Janak. Nat Neurosci 2013
Times Cited: 449, Times Co-cited: 8

Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons.
Ben Engelhard, Joel Finkelstein, Julia Cox, Weston Fleming, Hee Jae Jang, Sharon Ornelas, Sue Ann Koay, Stephan Y Thiberge, Nathaniel D Daw, David W Tank,[...]. Nature 2019
Times Cited: 119, Times Co-cited: 8

Dopamine in motivational control: rewarding, aversive, and alerting.
Ethan S Bromberg-Martin, Masayuki Matsumoto, Okihide Hikosaka. Neuron 2010
Times Co-cited: 7

By carrot or by stick: cognitive reinforcement learning in parkinsonism.
Michael J Frank, Lauren C Seeberger, Randall C O'reilly. Science 2004
Times Co-cited: 7

Tonic dopamine: opportunity costs and the control of response vigor.
Yael Niv, Nathaniel D Daw, Daphna Joel, Peter Dayan. Psychopharmacology (Berl) 2007
Times Cited: 623, Times Co-cited: 7

Dissociable dopamine dynamics for learning and motivation.
Ali Mohebi, Jeffrey R Pettibone, Arif A Hamid, Jenny-Marie T Wong, Leah T Vinson, Tommaso Patriarchi, Lin Tian, Robert T Kennedy, Joshua D Berke. Nature 2019
Times Cited: 164, Times Co-cited: 7

Human-level control through deep reinforcement learning.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski,[...]. Nature 2015
Times Cited: 930, Times Co-cited: 7

Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning.
Michael J Frank, Ahmed A Moustafa, Heather M Haughey, Tim Curran, Kent E Hutchison. Proc Natl Acad Sci U S A 2007
Times Cited: 404, Times Co-cited: 6

Whole-brain mapping of direct inputs to midbrain dopamine neurons.
Mitsuko Watabe-Uchida, Lisa Zhu, Sachie K Ogawa, Archana Vamanrao, Naoshige Uchida. Neuron 2012
Times Cited: 700, Times Co-cited: 6

Arithmetic and local circuitry underlying dopamine prediction errors.
Neir Eshel, Michael Bukwich, Vinod Rao, Vivian Hemmelder, Ju Tian, Naoshige Uchida. Nature 2015
Times Cited: 167, Times Co-cited: 6

Discrete coding of reward probability and uncertainty by dopamine neurons.
Christopher D Fiorillo, Philippe N Tobler, Wolfram Schultz. Science 2003
Times Co-cited: 6

A Unified Framework for Dopamine Signals across Timescales.
HyungGoo R Kim, Athar N Malik, John G Mikhael, Pol Bech, Iku Tsutsui-Kimura, Fangmiao Sun, Yajun Zhang, Yulong Li, Mitsuko Watabe-Uchida, Samuel J Gershman,[...]. Cell 2020
Times Cited: 33, Times Co-cited: 18

Reward Processing in Depression: A Conceptual and Meta-Analytic Review Across fMRI and EEG Studies.
Hanna Keren, Georgia O'Callaghan, Pablo Vidal-Ribas, George A Buzzell, Melissa A Brotman, Ellen Leibenluft, Pedro M Pan, Liana Meffert, Ariela Kaiser, Selina Wolke,[...]. Am J Psychiatry 2018
Times Cited: 142, Times Co-cited: 5

Striatal circuits for reward learning and decision-making.
Julia Cox, Ilana B Witten. Nat Rev Neurosci 2019
Times Cited: 107, Times Co-cited: 5

Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning.
Hsing-Chen Tsai, Feng Zhang, Antoine Adamantidis, Garret D Stuber, Antonello Bonci, Luis de Lecea, Karl Deisseroth. Science 2009
Times Cited: 761, Times Co-cited: 5

Modulation of striatal projection systems by dopamine.
Charles R Gerfen, D James Surmeier. Annu Rev Neurosci 2011
Times Cited: 914, Times Co-cited: 5

Adaptive learning under expected and unexpected uncertainty.
Alireza Soltani, Alicia Izquierdo. Nat Rev Neurosci 2019
Times Cited: 59, Times Co-cited: 8

Reinforcement learning in multidimensional environments relies on attention mechanisms.
Yael Niv, Reka Daniel, Andra Geana, Samuel J Gershman, Yuan Chang Leong, Angela Radulescu, Robert C Wilson. J Neurosci 2015
Times Cited: 134, Times Co-cited: 5

Opposite initialization to novel cues in dopamine signaling in ventral and posterior striatum in mice.
William Menegas, Benedicte M Babayan, Naoshige Uchida, Mitsuko Watabe-Uchida. Elife 2017
Times Cited: 97, Times Co-cited: 5

Hippocampal Contributions to Model-Based Planning and Spatial Memory.
Oliver M Vikbladh, Michael R Meager, John King, Karen Blackmon, Orrin Devinsky, Daphna Shohamy, Neil Burgess, Nathaniel D Daw. Neuron 2019
Times Cited: 44, Times Co-cited: 11

Deep Reinforcement Learning and Its Neuroscientific Implications.
Matthew Botvinick, Jane X Wang, Will Dabney, Kevin J Miller, Zeb Kurth-Nelson. Neuron 2020
Times Cited: 25, Times Co-cited: 20

Model-based predictions for dopamine.
Angela J Langdon, Melissa J Sharpe, Geoffrey Schoenbaum, Yael Niv. Curr Opin Neurobiol 2018
Times Cited: 49, Times Co-cited: 10

Co-cited is the co-citation frequency: the number of articles that cite the listed article together with the query article. Similarity is the co-citation frequency expressed as a percentage of the times cited of the query article or of the article in the search results, whichever is lower. These numbers are calculated over the last 100 citations when an article is cited more than 100 times.
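To make the Similarity definition above concrete, here is a minimal sketch in Python. It assumes the 100-citation cap applies to both times-cited counts and ignores rounding; the function name and the worked numbers (taken from the Hamid et al. 2016 entry above) are illustrative only, not the tool's actual implementation.

# Sketch of the Similarity score described above (an interpretation,
# not the tool's actual code). co_cited is the number of citing articles
# shared with the query article; times-cited counts are treated as capped
# at 100, since only the last 100 citations are used for articles cited
# more than 100 times.

def similarity(co_cited: int, times_cited_query: int, times_cited_result: int) -> float:
    """Co-citation frequency as a percentage of the lower (capped) times-cited count."""
    capped_query = min(times_cited_query, 100)
    capped_result = min(times_cited_result, 100)
    return 100.0 * co_cited / min(capped_query, capped_result)

# Example with the first entry above (Hamid et al. 2016):
# query article cited 4564 times, result cited 333 times, co-cited 13 times.
print(similarity(13, 4564, 333))  # -> 13.0 (percent)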