A. Abadie, Semiparametric difference-in-differences estimators, The Review of Economic Studies, vol.72, issue.1, pp.1-19, 2005.

D. Acemoglu, S. Johnson, and J. A. Robinson, The colonial origins of comparative development: An empirical investigation: Reply, American Economic Review, vol.102, issue.6, pp.3077-3110, 2012.

P. Aghion, M. Dewatripont, C. Hoxby, A. Mas-Colell, and A. Sapir, The governance and performance of universities: Evidence from Europe and the US, Economic Policy, vol.25, issue.61, pp.7-59, 2010.

P. Aghion, M. Dewatripont, and J. C. Stein, Academic freedom, private-sector focus, and the process of innovation, The RAND Journal of Economics, vol.39, issue.3, pp.617-635, 2008.

B. Alberts, Overbuilding research capacity, Science, vol.329, issue.5997, p.1257, 2010.

Agence Nationale de la Recherche (ANR), Rapport annuel 2005 [Annual Report 2005], 2005.

A. Arora, P. A. David, and A. Gambardella, Reputation and competence in publicly funded science: Estimating the effects on research group productivity, in The Economics and Econometrics of Innovation, pp.141-176, 2000.

A. Arora and A. Gambardella, The impact of NSF support for basic research in economics, pp.91-117, 2005.

B. Arpino and F. Mealli, The specification of the propensity score in multilevel observational studies, Computational Statistics & Data Analysis, vol.55, issue.4, pp.1770-1780, 2011.

K. J. Arrow, Economic welfare and the allocation of resources for invention, in Readings in Industrial Economics, pp.219-236, 1972.

P. C. Austin, Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research, Communications in Statistics-Simulation and Computation, vol.38, issue.6, pp.1228-1234, 2009.

P. C. Austin, An introduction to propensity score methods for reducing the effects of confounding in observational studies, Multivariate Behavioral Research, vol.46, issue.3, pp.399-424, 2011.


P. Azoulay, J. S. Graff-zivin, and G. Manso, Incentives and creativity: evidence from the academic life sciences, The RAND Journal of Economics, vol.42, issue.3, pp.527-554, 2011.

A. Banal-Estañol, I. Macho-Stadler, and D. Pérez-Castrillo, Team diversity evaluation by research grant agencies: Funding the seeds of radical innovation in academia?, 2018.

S. O. Becker and A. Ichino, Estimation of average treatment effects based on propensity scores, The Stata Journal, vol.2, issue.4, pp.358-377, 2002.

J. M. Benavente, G. Crespi, L. F. Garone, and A. Maffioli, The impact of national research funds: A regression discontinuity approach to the Chilean FONDECYT, Research Policy, vol.41, issue.8, pp.1461-1475, 2012.

M. Bertrand, E. Duflo, and S. Mullainathan, How much should we trust difference-in-differences estimates?, The Quarterly Journal of Economics, vol.119, issue.1, pp.249-275, 2004.

National Science Board, Science & Engineering Indicators, 2016.

L. Bornmann, L. Leydesdorff, and P. Van-den-besselaar, A meta-evaluation of scientific research proposals: Different ways of comparing rejected to awarded applications, Journal of Informetrics, vol.4, issue.3, pp.211-220, 2010.

L. Bornmann, C. Wagner, and L. Leydesdorff, BRICS countries and scientific excellence: A bibliometric analysis of most frequently cited papers, Journal of the Association for Information Science and Technology, vol.66, issue.7, pp.1507-1513, 2015.

K. J. Boudreau, E. C. Guinan, K. R. Lakhani, and C. Riedl, Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science, Management Science, vol.62, issue.10, pp.2765-2783, 2016.

D. W. Braben, Pioneering research: A risk worth taking, 2004.

N. Carayol and J. Dalle, Sequential problem choice and the reward system in open science, Structural Change and Economic Dynamics, vol.18, issue.2, pp.167-191, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00279233

N. Carayol, A. Lahatte, and O. Llopis, The right job and the job right: Novelty, impact and journal stratification in science, 2018.
URL : https://hal.archives-ouvertes.fr/hal-02160816

N. Carayol and M. Matt, Individual and collective determinants of academic scientists' productivity, Information Economics and Policy, vol.18, issue.1, pp.55-72, 2006.
URL : https://hal.archives-ouvertes.fr/hal-00279197

S. Carley and A. L. Porter, A forward diversity index, Scientometrics, vol.90, issue.2, pp.407-427, 2012.

G. M. Carter, J. D. Winkler, and A. K. Biddle-zehnder, An evaluation of the NIH research career development award, 1987.

D. E. Chubin and E. J. Hackett, Peerless science: Peer review and US science policy, 1990.

D. Chudnovsky, A. López, M. A. Rossi, and D. Ubfal, Money for science? The impact of research grants on academic output, Fiscal Studies, vol.29, issue.1, pp.75-87, 2008.

W. G. Cochran and D. B. Rubin, Controlling bias in observational studies: A review, Sankhyā: The Indian Journal of Statistics, Series A, pp.417-446, 1973.

S. Cole and G. A. Simon, Chance and consensus in peer review, Science, vol.214, issue.4523, pp.881-886, 1981.

P. Dasgupta and P. A. David, Toward a new economics of science, Research Policy, vol.23, issue.5, pp.487-521, 1994.

T. E. Day, The big consequences of small biases: A simulation of peer review, Research Policy, vol.44, issue.6, pp.1266-1270, 2015.

R. H. Dehejia and S. Wahba, Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs, Journal of the American Statistical Association, vol.94, issue.448, pp.1053-1062, 1999.

R. H. Dehejia and S. Wahba, Propensity score-matching methods for nonexperimental causal studies, Review of Economics and Statistics, vol.84, issue.1, pp.151-161, 2002.

L. Egghe and I. R. Rao, Study of different h-indices for groups of authors, Journal of the American Society for Information Science and Technology, vol.59, issue.8, pp.1276-1281, 2008.

ERC, European Research Council, Annual report of the ERC activities and achievements, 2017.

F. C. Fang, A. Bowen, and A. Casadevall, NIH peer review percentile scores are poorly predictive of grant productivity, 2016.

F. C. Fang and A. Casadevall, Grant funding: Playing the odds, Science, vol.352, issue.6282, pp.158-158, 2016.

L. Fleming, Recombinant uncertainty in technological search, Management Science, vol.47, issue.1, pp.117-132, 2001.

M. Frölich, Finite-sample properties of propensity-score matching and weighting estimators, Review of Economics and Statistics, vol.86, issue.1, pp.77-90, 2004.

A. Geuna, The changing rationale for European university research funding: Are there negative unintended consequences?, Journal of Economic Issues, vol.35, issue.3, pp.607-632, 2001.

A. Geuna and B. R. Martin, University research evaluation and funding: An international comparison, Minerva, vol.41, pp.277-304, 2003.

A. Geuna and F. Rossi, The university and the economy: pathways to growth and economic development, 2015.

D. K. Ginther, W. T. Schaffer, J. Schnell, B. Masimore, F. Liu et al., Race, ethnicity, and NIH research awards, Science, vol.333, issue.6045, pp.1015-1019, 2011.

G. González-Alcaide, M. Castellano-Gómez, J. C. Valderrama-Zurián, and R. Aleixandre-Benavent, Literatura científica de autores españoles sobre análisis de citas y factor de impacto en biomedicina [Scientific literature by Spanish authors on citation analysis and impact factor in biomedicine], 1981.

R. Guimera, B. Uzzi, J. Spiro, and L. A. Amaral, Team assembly mechanisms determine collaboration network structure and team performance, Science, vol.308, issue.5722, pp.697-702, 2005.

J. Gush, A. Jaffe, V. Larsen, and A. Laws, The effect of public funding on research output: The New Zealand Marsden Fund, vol.52, pp.227-248, 2018.

B. B. Hansen and J. Bowers, Attributing effects to a cluster-randomized get-out-the-vote campaign, Journal of the American Statistical Association, vol.104, issue.487, pp.873-885, 2009.

J. J. Heckman, The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models, Annals of Economic and Social Measurement, pp.475-492, 1976.

J. J. Heckman, Sample selection bias as a specification error, Econometrica, vol.47, issue.1, pp.153-161, 1979.

J. J. Heckman, H. Ichimura, and P. E. Todd, Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme, The Review of Economic Studies, vol.64, pp.605-654, 1997.

D. Hicks, P. Wouters, L. Waltman, S. D. Rijcke, and I. Rafols, The Leiden Manifesto for research metrics, Nature, vol.520, issue.7548, pp.429-431, 2015.

K. Hirano and G. W. Imbens, Estimation of causal effects using propensity score weighting: An application to data on right heart catheterization, Health Services and Outcomes Research Methodology, vol.2, issue.3-4, pp.259-278, 2001.

K. Hirano, G. W. Imbens, and G. Ridder, Efficient estimation of average treatment effects using the estimated propensity score, Econometrica, vol.71, issue.4, pp.1161-1189, 2003.

J. E. Hirsch, An index to quantify an individual's scientific research output, Proceedings of the National Academy of Sciences, vol.102, issue.46, pp.16569-16572, 2005.

G. W. Imbens, Nonparametric estimation of average treatment effects under exogeneity: A review, Review of Economics and Statistics, vol.86, issue.1, pp.4-29, 2004.

G. W. Imbens and J. M. Wooldridge, Recent developments in the econometrics of program evaluation, Journal of Economic Literature, vol.47, issue.1, pp.5-86, 2009.

J. Ioannidis, K. W. Boyack, H. Small, A. A. Sorensen, and R. Klavans, Bibliometrics: Is your most cited work your best?, Nature News, vol.514, issue.7524, p.561, 2014.

J. P. Ioannidis, More time for research: fund people not projects, Nature, vol.477, issue.7366, p.529, 2011.

B. A. Jacob and L. Lefgren, The impact of research grant funding on scientific productivity, Journal of Public Economics, vol.95, issue.9, pp.1168-1177, 2011.

F. Jacob, Evolution and tinkering, Science, vol.196, issue.4295, pp.1161-1166, 1977.

B. F. Jones, S. Wuchty, and B. Uzzi, Multi-university research teams: Shifting impact, geography, and stratification in science, Science, vol.322, issue.5905, pp.1259-1262, 2008.

J. Kim and M. Seltzer, Causal inference in multilevel settings in which selection processes vary across schools, CSE Technical Report 708, 2007.

L. Langfeldt, M. Benner, G. Sivertsen, E. H. Kristiansen, D. W. Aksnes et al., Excellence and growth dynamics: A comparative study of the Matthew effect, Science and Public Policy, vol.42, issue.5, pp.661-675, 2015.

G. Laudel, Conclave in the Tower of Babel: How peers review interdisciplinary research proposals, Research Evaluation, vol.15, issue.1, pp.57-68, 2006.

Y. Lee, J. P. Walsh, and J. Wang, Creativity in scientific teams: Unpacking novelty and impact, Research Policy, vol.44, issue.3, pp.684-697, 2015.

H. C. Lehman, Age and achievement, 1953.

S. G. Levin and P. E. Stephan, Research productivity over the life cycle: Evidence for academic scientists, The American Economic Review, pp.114-132, 1991.


D. Li, Expertise versus bias in evaluation: Evidence from the NIH, American Economic Journal: Applied Economics, vol.9, issue.2, pp.60-92, 2017.

D. Li and L. Agha, Big names or big ideas: Do peer-review panels select the best science proposals?, Science, vol.348, issue.6233, pp.434-438, 2015.

F. Li, A. M. Zaslavsky, and M. B. Landrum, Propensity score weighting with multilevel data, Statistics in Medicine, vol.32, issue.19, pp.3373-3387, 2013.

L. Li, G. Ding, N. Feng, M. Wang, and Y. Ho, Global stem cell research trend: Bibliometric analysis as a tool for mapping of trends from, Scientometrics, vol.80, issue.1, pp.39-58, 1991.

G. Mallard, M. Lamont, and J. Guetzkow, Fairness as appropriateness: Negotiating epistemological differences in peer review, Science, Technology, & Human Values, vol.34, issue.5, pp.573-606, 2009.

H. W. Marsh, U. W. Jayasinghe, and N. W. Bond, Improving the peer-review process for grant applications: Reliability, validity, bias, and generalizability, American Psychologist, vol.63, issue.3, p.160, 2008.

J. M. Mcdowell, Obsolescence of knowledge and career publication profiles: Some evidence of differences among fields in costs of interrupted careers, The American Economic Review, vol.72, issue.4, pp.752-768, 1982.

R. K. Merton, Priorities in scientific discovery: A chapter in the sociology of science, American Sociological Review, vol.22, issue.6, pp.635-659, 1957.

R. K. Merton, The Matthew effect in science: The reward and communication systems of science are considered, Science, vol.159, issue.3810, pp.56-63, 1968.

R. K. Merton, The Matthew effect in science, II: Cumulative advantage and the symbolism of intellectual property, Isis, vol.79, issue.4, pp.606-623, 1988.

H. F. Moed, UK research assessment exercises: Informed judgments on research quality or quantity?, Scientometrics, vol.74, issue.1, pp.153-161, 2008.

A. Molinari and J. Molinari, Mathematical aspects of a new criterion for ranking scientific institutions based on the h-index, Scientometrics, vol.75, issue.2, pp.339-356, 2008.

T. Möller, M. Schmidt, and S. Hornbostel, Assessing the effects of the German Excellence Initiative with bibliometric methods, Scientometrics, vol.109, pp.2217-2239, 2016.

C. Musselin, La grande course des universités [The great race of universities], Presses de Sciences Po, 2017.

R. R. Nelson, The simple economics of basic scientific research, Journal of Political Economy, vol.67, issue.3, pp.297-306, 1959.

J. M. Nicholson and J. P. Ioannidis, Research grants: Conform and be funded, Nature, vol.492, issue.7427, p.34, 2012.

H. Park, J. Lee, and B. Kim, Project selection in NIH: A natural experiment from ARRA, Research Policy, vol.44, issue.6, pp.1145-1159, 2015.

G. Petsko, Goodbye, Columbus, Genome Biology, vol.13, issue.5, p.155, 2012.

S. D. Pimentel, L. C. Page, and L. Keele, An overview of optimal multilevel matching using network flows with the matchMulti package in R, 2018.

C. Post, E. De-lia, N. Ditomaso, T. M. Tirpak, and R. Borwankar, Capitalizing on thought diversity for innovation, Research-Technology Management, vol.52, issue.6, pp.14-25, 2009.

I. Rafols and M. Meyer, Diversity and network coherence as indicators of interdisciplinarity: case studies in bionanoscience, Scientometrics, vol.82, issue.2, pp.263-287, 2009.

L. Reijnhoudt, R. Costas, E. Noyons, K. Börner, and A. Scharnhorst, 'Seed + expand': A general methodology for detecting publication oeuvres of individual researchers, Scientometrics, vol.101, issue.2, pp.1403-1417, 2014.

J. M. Robins, M. A. Hernan, and B. Brumback, Marginal structural models and causal inference in epidemiology, Epidemiology, vol.11, issue.5, 2000.

D. Rodrik, Institutions for high-quality growth: what they are and how to acquire them, Studies in comparative international development, vol.35, issue.3, pp.3-31, 2000.

P. R. Rosenbaum, Optimal matching for observational studies, Journal of the American Statistical Association, vol.84, issue.408, pp.1024-1032, 1989.

P. R. Rosenbaum, Design of Observational Studies, 2010.

P. R. Rosenbaum, Optimal matching of an optimally chosen subset in observational studies, Journal of Computational and Graphical Statistics, vol.21, issue.1, pp.57-71, 2012.

P. R. Rosenbaum, R. N. Ross, and J. H. Silber, Minimum distance matched sampling with fine balance in an observational study of treatment for ovarian cancer, Journal of the American Statistical Association, vol.102, issue.477, pp.75-83, 2007.

P. R. Rosenbaum and D. B. Rubin, The central role of the propensity score in observational studies for causal effects, Biometrika, vol.70, issue.1, pp.41-55, 1983.

P. R. Rosenbaum and D. B. Rubin, Reducing bias in observational studies using subclassification on the propensity score, Journal of the American Statistical Association, vol.79, issue.387, pp.516-524, 1984.

P. R. Rosenbaum and D. B. Rubin, Constructing a control group using multivariate matched sampling methods that incorporate the propensity score, The American Statistician, vol.39, issue.1, pp.33-38, 1985.

D. B. Rubin, Estimating causal effects of treatments in randomized and nonrandomized studies, Journal of Educational Psychology, vol.66, issue.5, p.688, 1974.

U. Schmoch and T. Schubert, Sustainability of incentives for excellent research – the German case, Scientometrics, vol.81, issue.1, pp.195-218, 2009.

P. E. Stephan, The Economics of Science, Journal of Economic Literature, vol.34, issue.3, pp.1199-1235, 1996.

P. E. Stephan, How economics shapes science, 2012.

P. E. Stephan, R. Veugelers, and J. Wang, Blinkered by bibliometrics, Nature, vol.544, issue.7651, pp.411-412, 2017.

D. Trapido, How novelty in knowledge earns recognition: The role of consistent identities, Research Policy, vol.44, issue.8, pp.1488-1500, 2015.

G. Travis and H. M. Collins, New light on old boys: Cognitive and institutional particularism in the peer review system, Science, Technology, & Human Values, vol.16, issue.3, pp.322-341, 1991.

B. Uzzi, S. Mukherjee, M. Stringer, and B. Jones, Atypical combinations and scientific impact, Science, vol.342, issue.6157, pp.468-472, 2013.

A. F. Van-raan, Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups, Scientometrics, vol.67, issue.3, pp.491-502, 2006.

J. Van-steen and M. Eijffinger, Evaluation practices of scientific research in the Netherlands, Research Evaluation, vol.7, issue.2, pp.113-122, 1998.

C. S. Wagner, J. D. Roessner, K. Bobb, J. T. Klein, K. W. Boyack et al., Approaches to understanding and measuring interdisciplinary scientific research (IDR): A review of the literature, Journal of Informetrics, vol.5, issue.1, pp.14-26, 2011.

L. Waltman and M. Schreiber, On the calculation of percentile-based bibliometric indicators, Journal of the American Society for Information Science and Technology, vol.64, issue.2, pp.372-379, 2013.

J. Wang, Y. Lee, and J. Walsh, Funding model and creativity in science: Competitive versus block funding and status contingency effects, Research Policy, vol.47, issue.6, pp.1070-1083, 2018.

J. Wang, R. Veugelers, and P. E. Stephan, Bias against novelty in science: A cautionary tale for users of bibliometric indicators, Research Policy, vol.46, issue.8, pp.1416-1436, 2017.

S. Wessely, Peer review of grant applications: What do we know?, The Lancet, vol.352, issue.9124, pp.301-305, 1998.

S. Wuchty, B. F. Jones, and B. Uzzi, The increasing dominance of teams in production of knowledge, Science, vol.316, issue.5827, pp.1036-1039, 2007.

J. Zhang, Q. Yu, F. Zheng, C. Long, Z. Lu et al., Comparing Keywords Plus of WOS and author keywords: A case study of patient adherence research, Journal of the Association for Information Science and Technology, vol.67, issue.4, pp.967-972, 2016.

J. R. Zubizarreta and L. Keele, Optimal multilevel matching in clustered observational studies: A case study of the effectiveness of private schools under a large-scale voucher system, Journal of the American Statistical Association, vol.112, issue.518, pp.547-560, 2017.

J. R. Zubizarreta, R. D. Paredes, and P. R. Rosenbaum, Matching for balance, pairing for heterogeneity in an observational study of the effectiveness of for-profit and not-for-profit high schools in Chile, The Annals of Applied Statistics, vol.8, issue.1, pp.204-231, 2014.

H. Zuckerman and R. K. Merton, in A Sociology of Age Stratification: Aging and Society, pp.497-559, 1972.

List of Tables

Evolution of the funding amount according to the source of funds for Universities and Public Research Organizations (PRO)

Number of submitted projects and number of partner×project according to the application date for the final sample
Distribution of the number of partner×project according to the ANR program and to the application date for our final sample
Variables description used with the whole (or PI only) sample (for Tables 1.8 to 1.13)
Variables description used with project teams (Table 1.16)
Descriptive statistics on the groups of the non-participants, the not-granted applicants and the granted ones for the whole sample of researchers
Descriptive statistics on the groups of the not-granted applicants and the granted ones with the sample of teams
Factors that influence the probability to submit a project for the whole sample (Mean marginal effects reported)
Factors that influence the probability to receive a grant for the whole sample (Mean marginal effects reported)
Factors that influence the probability to submit a project for the PI (Mean marginal effects reported)
Factors that influence the probability to receive a grant for the PI (Mean marginal effects reported)
Factors that influence the probability to submit a project for the directed programs (Mean marginal effects reported)
Factors that influence the probability to receive a grant for the directed programs (Mean marginal effects reported)
Factors that influence the probability to submit a project for the non-directed program (Mean marginal effects reported)
Factors that influence the probability to receive a grant for the non-directed program (Mean marginal effects reported)
Factors that influence the probability to receive a grant for the sample of teams (Mean marginal effects reported)
Robustness Check 1: Factors that influence the probability to submit a project for the whole sample (Mean marginal effects reported)
Robustness Check 1: Factors that influence the probability to receive a grant for the whole sample (Mean marginal effects reported)
Robustness Check 2: Factors that influence the probability to submit a project for the whole sample (Mean marginal effects reported)
Robustness Check 2: Factors that influence the probability to receive a grant for the whole sample (Mean marginal effects reported)
Robustness Check 3: Factors that influence the probability to receive a grant for the whole sample (Mean marginal effects reported)
Factors that influence the probability to submit a project for the whole sample (Heckman Probit coefficients reported)
Factors that influence the probability to receive a grant for the whole sample (Heckman Probit coefficients reported)
Factors that influence the probability to submit a project for the PI (Heckman Probit coefficients reported)
Factors that influence the probability to receive a grant for the PI (Heckman Probit coefficients reported)
Factors that influence the probability to submit a project for the non-directed programs (Heckman Probit coefficients reported)
Factors that influence the probability to receive a grant for the non-directed programs (Heckman Probit coefficients reported)
Factors that influence the probability to submit a project for the directed programs (Heckman Probit coefficients reported)
Factors that influence the probability to receive a grant for the directed programs (Heckman Probit coefficients reported)
Factors that influence the probability to receive a grant for the sample of teams (Probit coefficients reported)

Descriptive statistics on outcome variables among non-applicants, unsuccessful applicants and granted ones, before and after the reference year
Descriptive statistics on selection variables for non-applicants, not granted applicants and granted ones, by program type (directed or non-directed)
Synthesis of the eight specifications of the propensity score model
List of covariates used for the propensity score estimation in the reference model, for directed programs
List of covariates used for the propensity score estimation in the reference model, for non-directed programs
Parallel path test: Difference-in-differences estimates of the mean effect of treatment on various production variables with the reference specification of the selection stage
Average treatment effect of receiving an ANR grant on publication outcomes and collaboration behaviors (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on outcomes according to non-directed versus directed funding schemes (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on the average and maximum …
Differentiated effects of receiving an ANR grant on outcomes according to age dummy: below the median age (43) versus over the median age (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on publication outcomes according to the position in the citation distribution at the time of funding (the three years after treatment against the three years before)
The final sample of 31,081 researchers, the applicants and the granted
Applications from the public sector and funded partner×project by year (in # and amounts in million euros)
Applications from the public sector and funded partner×project by year for our final sample (in # and amounts in million euros)
Researchers' and professors' status in the three samples
Number of applications by year and by program
Allocation of the ANR applications into large disciplines for our final sample
Allocation of the ANR granted applications into large disciplines for our final sample
List of sections assigned to our final sample of researchers, according to the research institute
List of covariates used for the propensity score estimation in model 5 (non-directed programs)
List of covariates used for the propensity score estimation in model 5 (directed programs)
Groups of sections, given the classification of the research institute (used in model 3 and model 4)
Groups of sections (used in model 5 and model 8)
Parallel path test: Difference-in-differences estimates of the mean effect of treatment on various production variables
Parallel path test: Difference-in-differences estimates of the mean effect of treatment on various production variables
Average treatment effect of receiving an ANR grant on outcomes (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on outcomes according to non-directed versus directed funding schemes (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on outcomes according to age dummy: below the median age (43) versus over (the three years after treatment against the three years before)
Differentiated effects of receiving an ANR grant on publication outcomes (next three years against previous three years) according to the investigator's role (principal investigator vs. partner coordinator)
Differentiated effects of receiving an ANR grant on publication outcomes according to the year of funding, on the main production variables (next three years against previous three years)
Differentiated effects of receiving an ANR grant on publication outcomes according to the scientific discipline of the applicant (the three years after treatment against the three years before)
Number of newly retrieved publications at each step and the number of related researchers
Number of distinct labs and number of professors and researchers among treated individuals and among potential controls
Description of the individual-level covariates used in the matching process and the laboratory-level covariates
Standardized difference of means between treated and control groups, before and after the matching
Mean treatment effect estimated in the three years following the treatment assignment, and in the three next years, by difference-in-differences
Comparison of means between the treated researchers in the matched sample (matched) and the whole sample of treated (all)
Standardized difference of means between treated and control groups, before and after the matching
Mean treatment effect estimated in the three years following the treatment assignment, and in the three next years, by difference-in-differences

List of Figures

Gross domestic expenditure on R&D (GERD) for several EU countries, the UK, Japan and the US
Standardized bias (in %) associated with each explanatory covariate in the original unmatched sample and in the weighted sample for the directed (left graph) and non-directed (right graph) programs, using the estimated inverse probability of treatment weights
Density and box plot of the estimated propensity scores before and after weighting by the inverse probability of treatment weights for the directed programs
Density and box plot of the estimated propensity scores before and after weighting by the inverse probability of treatment weights for the non-directed programs
Yearly scientific outcomes of the funded professors and researchers (red solid line) and of their controls (blue dashed line) with respect to the funding year (t = 0)
Yearly scientific outcomes of the funded professors and researchers (red solid line) and of their controls (blue dashed line) who applied to the two funding schemes, directed (o marks) and non-directed (× marks), with respect to the funding year (t = 0)
Age histograms for the total population, the applicants and those funded
Histogram of the number of applications for all programs (top graphs) and by type of program (directed or non-directed, bottom graphs)
Histogram of the number of grants for all programs (top graphs) and by type of program (directed or non-directed, bottom graphs)
Intensity of the participation in directed and non-directed programs at the specialties level (for sections with more than 25 researchers)
Histogram of the size of the laboratories (number of tenured researchers or professors) in the three samples


Density and box plot of the estimated propensity scores before and after matching with the 5 nearest neighbors method for the directed programs
Density and box plot of the estimated propensity scores before and after matching with the kernel method for the directed programs
Density and box plot of the estimated propensity scores before and after matching with the 5 nearest neighbors method for the non-directed programs
Density and box plot of the estimated propensity scores before and after matching with the kernel method for the non-directed programs
Standardized bias (in %) associated with each explanatory covariate in the original unmatched sample and in the matched sample for the directed (left graph) and non-directed (right graph) programs with the nearest neighbors method
Standardized bias (in %) associated with each explanatory covariate in the original unmatched sample and in the matched sample for the directed (left graph) and non-directed (right graph) programs with the kernel method
Comparison of scores on three indicators, comparing correct vs. retrieved measures for the professors and researchers in the benchmark
Relative number of publications of the Core Bordeaux IdEx community (research clusters). In the right graph, article counts are adjusted for coauthorship
Relative number of top cited articles (top 10%) in each field of the Core Bordeaux IdEx community (research clusters). In the right graph, the proportion of such top papers is considered instead of the mean
Relative h-index of the Core Bordeaux IdEx community (research clusters)
Average novelty of articles published in the Core Bordeaux IdEx community (research clusters)
Relative diversity of the citing sources of the publications of the Core Bordeaux IdEx community (research clusters). The left graph uses the Simpson diversity index while the right graph uses the Shannon index
Fractional polynomial estimates of the relative number of publications of the Core Bordeaux IdEx community (research clusters)
Fractional polynomial estimates of the relative number of top cited articles (top 10%) in each field of the Core Bordeaux IdEx community (research clusters). In the right graph, the proportion of such top papers is considered instead of the mean
Fractional polynomial estimates of the relative h-index of the Core Bordeaux IdEx community (research clusters). In the right graph, the h-index is adjusted for age
Fractional polynomial estimates of the average novelty of articles published in the Core Bordeaux IdEx community (research clusters)
Fractional polynomial estimates of the relative diversity of the citing sources of the publications of the Core Bordeaux IdEx community (research clusters). The left graph uses the Simpson diversity index while the right graph uses the Shannon index
Histograms of the relative number of publications of the Core Bordeaux IdEx community (research clusters), before and after the treatment. In the right graph, article counts are adjusted for coauthorship
Histograms of the relative number of top cited articles (top 10%) in each field of the Core Bordeaux IdEx community (research clusters). In the right graph, the proportion of such top papers is considered instead of the mean
Histograms of the relative h-index of the Core Bordeaux IdEx community (research clusters)
Histograms of the average novelty of articles published in the Core Bordeaux IdEx community (research clusters)
Histograms of the relative diversity of the citing sources of the publications of the Core Bordeaux IdEx community (research clusters). The left graph uses the Simpson diversity index while the right graph uses the Shannon index
Fractional polynomial estimates of the number of publications (adjusted by # coauthors) of the Core Bordeaux IdEx community (research clusters, in blue) and of their selected controls (in red)
Fractional polynomial estimates of the number of citations (…-year window, adjusted by # coauthors) obtained by yearly publications of the Core Bordeaux IdEx community (research clusters, in blue) and of their selected controls (in red)