All we have been talking about up to this point are numbers. Times Cited is a number. Impact Factor is a number. The h-index is a number. Numbers are cold. We need to include the context, the background, the caveats. We have already seen that without that context, these numbers can easily be misinterpreted. For example, imagine two researchers submit the relevant metrics for tenure review or some other evaluation. One of them is in a field where citing each other is the culture, while the other is in a field with relatively little citation activity, because it is a new and small field. Without this context, these numbers can easily mislead decision makers. The one with higher numbers may be just average in his or her field, while the one with lower numbers could be leading a whole new field and doing ground-breaking research.

But the fundamental reason for not reporting numbers alone is that the notion of impact is much more than the quantitative metrics and their context. That is why we need to consider the big picture of impact and be more descriptive and narrative in presenting it. In other words, we need to tell impact stories. Consider this example: back in the days when it was generally believed that smoking was good for you, research on the health consequences of smoking might have been cited a lot, but the impact described by its times cited count, or by the authors' h-indexes, is not nearly as powerful as the fact that a new law banning smoking in public places was passed based on that research. Another example: suppose you did research on injuries resulting from repetitive motion or lifting, and it caused the carpentry professional association to change its best practice guidelines. That is a real health benefit to hundreds of thousands of people; that is research translated into practice. This real world impact is far more powerful than anything your h-index can measure. So the point here is that impact stories should go beyond citation-count-based quantitative metrics and consider real world impacts in the big picture.

As I explained in the introductory video of this series of tutorials, we like to use a tree analogy to illustrate the big picture of impact. The root system represents the research output and activities. The impact that the quantitative metrics measure belongs to the root system, but makes up only a small part of it. Research output and activities cover a lot more than publications and their citations. One thing that is typically not covered by the traditional metrics is collaboration. Who are your collaborators? Do you have a lot of them? Are they geographically dispersed? Are they from different types of organizations? Are they from different disciplines and fields? The answers to those questions help illustrate the breadth and depth of your impact. The stem of the tree represents the translational process from research output and activities to real world impacts, and the leaves and branches represent the real world impacts themselves, such as a change in safety guidelines for carpenters or a new law banning smoking in public places. As you can see, quantitative metrics do not tell the whole story of impact, and a small part of the root system is not indicative or predictive of what is above the ground. But how would one find out about these real world impacts?
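Before we get to that, it may help to make concrete just how much these root-system numbers compress. Here is a minimal sketch (my own illustration, not part of this tutorial) of the h-index calculation: the largest h such that an author has h papers cited at least h times each. The citation counts below are hypothetical.

    # A minimal sketch in Python of the h-index: the largest h such that the
    # author has h papers with at least h citations each.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Two hypothetical researchers: one in a heavily citing field, one in a
    # small, new field. The single number hides that difference in context.
    print(h_index([48, 35, 22, 10, 9, 4]))  # 5
    print(h_index([6, 5, 3, 2, 1]))         # 3

The point of the sketch is simply that an entire publication record collapses into one integer, with no trace of the field, the collaborators, or any real world consequences.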
What we need is a checklist to help us capture them, such as the Becker model from Washington University, a framework for tracking the diffusion of research outputs and activities in order to locate indicators that demonstrate evidence of biomedical research impact. Here is a diagram illustrating the Becker model. In the middle, you have the research output and activities, which, as we explained previously, encompass more than publications and their citations. These diffuse into real world impacts along five distinct pathways, and within each pathway there are individual indicators of impact that can help you capture your impact. This is the reference document for the Becker model indicators of impact. Let's take a closer look. The indicators of impact work like a checklist to help you not miss anything important.

We are currently in the "Advancement of Knowledge" pathway, and these are the indicators of impact in this section. For example, if your research caused curricular changes, and curriculum guidelines refer to your research study as being significant or use your study as recommended or background reading for more information, or if your research findings are cited in teaching materials, those count as impact in the advancement of knowledge. In the "Clinical Implementation" pathway, for example, if your research resulted in a clinically effective approach to the management and treatment of a disease, disorder, or condition, that is impact in clinical implementation. If an ICD code is implemented as a result of your research study, or if your study resulted in a new diagnostic procedure, a new medical device, a new drug, and so on, that is considered impact in this category. My example about the change in carpenter best practice guidelines would fall under the "Community Benefit" pathway, as it is a case of research findings leading to public awareness or identification of risk factors of a disease, disorder, condition, or behavior. I already gave an example in the "Legislation and Policy" pathway with the smoking ban. An example of economic benefit would be research that lowers the cost or increases the cost-effectiveness of a treatment, thus saving billions of dollars for patients and taxpayers.

Obviously we won't have time in this tutorial to go into the details of all 300-plus indicators of impact in the Becker model, but I would highly encourage you to read about the framework and all of its indicators. Doing so not only helps you write more compelling impact stories, but can also serve as a guide for young and fledgling researchers to enhance and keep track of their research impact. The key to telling impact stories based on these indicators of impact is to prove how specific research studies resulted in, or led to, a real world impact. If you think a curricular guideline changed based on your research studies, how do you know that? If you think an ICD code was implemented based on your research, where is the evidence? If you think a particular law changed based on your research study, how can you prove that? In some cases, it does not require extensive effort to find such evidence. For example, if the curricular guideline specifically cites your work, or if the teaching materials specifically list your studies in the recommended readings, that counts as evidence. If a congressional document related to a new law cites your work, or if you presented testimony before a legislative body regarding your work, that counts as evidence of impact.
In other cases, it can be quite convoluted to find a direct relationship between specific studies and a real world impact, and extensive investigative work may be needed to uncover that relationship. Such effort may include, but is not limited to, contacting relevant individuals or organizations and searching extensively through the literature and media reports. Another difficulty in telling stories of real world impact is that research output and activities can take a long time to diffuse into real world impacts. On average, it can take 17 years for biomedical research to be translated into identifiable real world impact, if it happens at all. So depending on where you are in your career path, you may not have any yet.

Telling impact stories can be a daunting task, but the Becker model and its list of indicators of impact provide a framework to guide you through the process. Of course, this model is meant to be an open model; it cannot possibly exhaust all indicators of real world impact in all areas. You should add your own indicators of impact to the list if you find any that are not there yet.
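If it helps to keep such a list in a structured form, here is a minimal sketch of one way to track indicators and the evidence behind each impact story. It is my own illustration, not part of the Becker model or this tutorial: only the five pathway names come from the model, while the indicator wording, the ImpactStory record, and the example entry are hypothetical.

    # A minimal, illustrative sketch in Python of a personal checklist of
    # Becker-style pathways and indicators. Pathway names follow the model as
    # described above; the indicator strings and the example entry are
    # hypothetical.
    from dataclasses import dataclass

    BECKER_PATHWAYS = {
        "Advancement of Knowledge": [
            "cited in curriculum guidelines or teaching materials",
        ],
        "Clinical Implementation": [
            "new ICD code, diagnostic procedure, device, or drug",
        ],
        "Community Benefit": [
            "change in professional best practice guidelines",
        ],
        "Legislation and Policy": [
            "law or congressional document citing the research",
        ],
        "Economic Benefit": [
            "lower cost or higher cost-effectiveness of a treatment",
        ],
    }

    @dataclass
    class ImpactStory:
        pathway: str     # one of the keys in BECKER_PATHWAYS
        indicator: str   # which indicator the story falls under
        evidence: str    # the document or testimony that proves the link

    stories = [
        ImpactStory(
            pathway="Community Benefit",
            indicator="change in professional best practice guidelines",
            evidence="carpentry association guideline revision citing the lifting-injury study",
        ),
    ]

The evidence field is the important part: every entry should point to something you can show, such as a guideline, a congressional document, or recorded testimony, rather than an assumed connection.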