The best-known quantitative metric of impact at the journal level is probably the impact factor. As I described in the introduction to this series of tutorials, this is how the impact factor is calculated. If a journal has an impact factor of 31 in 2013, it means that articles published in that journal in 2011 and 2012 were, on average, cited 31 times in 2013. So the impact factor has a one-year census period--in this case, 2013--and the two previous years as the "target window"--in this case, 2011 and 2012. The census period is when the citing articles are published, and the target window is when the cited articles are published. The impact factor is a proprietary metric of Thomson Reuters, so the only place you can find the impact factor of a journal is in Thomson Reuters' Journal Citation Reports. Here it is... so, if you want to know the impact factor of a certain journal, you can just type the title of the journal into the search box right here. For example, "The American Journal of Human Genetics"... has an impact factor of 10.987 in 2013. If I want to know the previous years' impact factors for this journal, here they all are. Remember, the impact factor has a one-year census period and the two previous years as the target window. So if the journal had an impact factor of 11.202 in 2012, it means that, on average, articles published in the journal in 2010 and 2011 were cited 11.202 times in 2012. I can also select a subject area with "select categories"... For example, if I want to see a list of journals in the field of oncology, sorted by impact factor, I can select that and click "submit" at the bottom, here. So, here are the journals in Journal Citation Reports for that field. The journal CA-A has a whopping impact factor of 162.5 in 2013, and the list goes all the way down to 0.076. JCR also provides some additional metrics to account for some of the perceived weaknesses of the impact factor. 
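The arithmetic just described can be sketched in a few lines of Python. The journal and all the citation and article counts below are hypothetical, purely to illustrate the census period / target window idea:

```python
def impact_factor(citations_in_census_year, articles_in_target_window):
    """Impact factor: citations received during the one-year census
    period by articles published in the two-year target window, divided
    by the number of citable articles published in that window."""
    return citations_in_census_year / articles_in_target_window

# Hypothetical journal: 400 articles published in 2011-2012 that
# together received 4,400 citations in 2013 (the census year).
print(impact_factor(4400, 400))  # 11.0
```

Note that the metric is an average over the whole journal: a handful of heavily cited articles can pull the number up for everything else published in the window.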
For example, there is the impact factor without journal self-cites, which excludes citations by articles published in the same journal... There's the five-year impact factor... which expands the target window from two years to five. For journals in subject areas where citation activity continues to rise over several years, this metric allows more of their total citation activity to be included. The immediacy index is the average number of times an article is cited in the year it is published. That, of course, is problematic for journals that publish annually late in the year. I'm going to skip these and move to the end of the list... the Eigenfactor and Article Influence scores. The Eigenfactor score also increases the target window from two to five years, and excludes citations by articles published in the same journal. What makes it unique is that it also considers the importance of the source of the citation. So, citations from more influential journals are weighted more heavily than those from less influential ones. The idea is that not all citations are created equal. The technical term here is "eigenvector centrality", hence the name Eigenfactor. Eigenvector centrality is a measure of the influence of a node in a network. It assigns relative scores to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of a node than connections to low-scoring nodes. Therefore, an eigenvector centrality-based metric uses the entire network of citations, instead of just the immediate citations, to evaluate the importance of a journal: the score of a journal depends on the scores of the journals that cite it, which, themselves, depend on the scores of other journals in the network. The Article Influence score measures the average influence of a journal's articles. 
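To make the eigenvector centrality idea concrete, here is a minimal power-iteration sketch on a tiny, made-up citation network. This is not Thomson Reuters' actual Eigenfactor algorithm (which adds a five-year window, self-citation exclusion, and further normalizations); it only shows how scores propagate through the network:

```python
def eigenvector_centrality(adj, iterations=200):
    """Power iteration on a citation matrix, where
    adj[i][j] = 1 means journal i cites journal j (toy data).
    A journal's score grows when high-scoring journals cite it."""
    n = len(adj)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [sum(adj[i][j] * scores[i] for i in range(n))
               for j in range(n)]
        norm = sum(new) or 1.0          # normalize so scores sum to 1
        scores = [s / norm for s in new]
    return scores

# Toy network of three journals:
adj = [
    [0, 1, 1],  # journal 0 cites journals 1 and 2
    [0, 0, 1],  # journal 1 cites journal 2
    [1, 0, 0],  # journal 2 cites journal 0
]
scores = eigenvector_centrality(adj)
print(max(range(3), key=lambda j: scores[j]))  # 2
```

Journal 2 comes out on top: it is cited by both other journals, including journal 0, which is itself cited by the high-scoring journal 2. That circular dependence is exactly why the computation iterates until the scores settle.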
It is calculated by dividing a journal's Eigenfactor score by the number of articles in the journal, normalized as a fraction of all articles in all publications. The mean Article Influence score is one--a score greater than one indicates that articles in the journal have above-average influence... a score less than one indicates that articles in the journal have below-average influence. Notice that, despite its name, the Article Influence score is not an article-level measure--instead, it measures the importance of a journal. Another big player in providing journal-level metrics of impact is Elsevier, with their citation database Scopus. They use a different set of metrics--the SNIP, the SJR, and the IPP. This page explains them all. One of the often-criticized weaknesses of citation-based impact metrics is that they do not consider the fact that you are more likely to cite and be cited in some subject fields than in others. The SNIP is an attempt to fix that. SNIP stands for Source Normalized Impact per Paper. It measures contextual citation impact by weighting citations based on the total number of citations in a subject field. A citation is given higher value in subject areas where citations are less likely, and vice versa. SJR, or the SCImago Journal Rank, considers not only the subject field but also the quality and reputation of the journal that provides the citation. So this is kind of similar to the Eigenfactor, in that they're both based on the eigenvector centrality concept. The IPP, or Impact per Publication, measures the ratio of citations per article published in the journal. So, this is similar to the impact factor, but the IPP uses a three-year target window instead of two or five. IPP's creators claim that this is the optimal time period to accurately measure citations in most subject fields. So, if a journal has an IPP of 31 in 2014, it means articles published in 2011 through 2013 were, on average, cited 31 times in 2014. 
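The Article Influence normalization can be sketched like this. The Eigenfactor value and article counts are hypothetical, and the 0.01 scaling constant is an assumption based on one common statement of the formula, so treat this as an approximation of the idea rather than the published calculation:

```python
def article_influence(eigenfactor, journal_articles, all_articles):
    """Article Influence score, roughly: the journal's Eigenfactor
    score (scaled by an assumed 0.01) divided by the journal's share
    of all indexed articles. A result of 1 is average influence."""
    article_fraction = journal_articles / all_articles
    return 0.01 * eigenfactor / article_fraction

# Hypothetical journal: Eigenfactor 0.05, publishing 500 of
# 1,000,000 indexed articles -- exactly its "fair share" of
# citation influence, so the score lands at the mean of 1.
print(article_influence(0.05, 500, 1_000_000))  # 1.0
```

The division is what turns a whole-journal quantity (Eigenfactor) into a per-article average: a small journal with a modest Eigenfactor can still score well above one if its share of all articles is tiny.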
Journalmetrics.com is the website where you can find all of this information... it also allows you to find the scores for a particular journal. For example, if I type in "the American Journal of Human Genetics"... I can see the SNIP, SJR, and IPP scores for these census years. Scopus also has the "Scopus Journal Analyzer", which gives you a more visual view of these metrics. I can access that from the Scopus homepage... just click on "Compare Journals". So if I type in "the American Journal of Human Genetics"... I can see a line chart for each of these metrics. This is the SJR chart... the IPP chart... and the SNIP chart. I can switch between table view and chart view. I can also use this to compare journals on these metrics: if I type in another journal, let's say "Nature Genetics"... you can see it shows the line charts of the two journals together so that you can compare them. Google Scholar uses its own metrics for measuring journal-level impact. It borrows the idea of the h-index, which we will discuss in the next video, and we will come back to Google's journal-level impact metrics at that point. Journal-level metrics, by definition, measure the impact of journals. They should be used for that purpose, and for that purpose only--for example, evaluating a group of journals when deciding where to submit your manuscripts. In reality, however, they are often used in ways they're not designed for. This is most obvious with the impact factor. Because the impact factor is the best-known metric of impact out there, it does create some confusion. People tend to report the impact factors of the journals they publish in to demonstrate their articles' impact, or their own impact. Arguments such as the following are very common: "My article was published in a high impact factor journal, so it must have high impact!" "I publish my articles in high impact factor journals, so I must have high impact!" This can be true, but not always. 
There's no guarantee, because the impact factor, as we have seen, is designed to measure the impact of a journal, not an individual author or article. All the metrics we talked about in this video are citation-count based. Different metrics try to correct some of the perceived weaknesses of others--for example, by weighting different subject fields differently, or by weighting citations differently according to their source. However, most metrics are still fundamentally based on citations, so they share a most important weakness--citations do not necessarily mean positive influence. They don't tell the whole story. They're simply what we can measure. Citations themselves are not the point. The point is what the authors intend to say when they link paper A to paper B. Even within the world of citations, there are caveats to consider. This help page of the Journal Citation Reports advises that you should use JCR wisely. Citation data are not meant to replace informed peer review. Careful attention should be paid to the main conditions that can influence citation rates, such as language, journal history and format, publication schedule, and subject specialty. For example, journals published in non-English languages, or using non-Roman alphabets, may be less accessible to researchers worldwide, which can influence their citation patterns. Another example: review articles tend to be cited more, so a journal that publishes a lot of review articles tends to have a higher impact factor. So what's the bottom line? The bottom line is that different tools provide different metrics of impact at the journal level, and almost all of them are citation-count based. You should use these metrics as they are designed to be used--that is, to evaluate the impact of journals, not articles or authors. And you should bear in mind that any citation-count based metric should be looked at in context. 
Citations do not tell the whole story, and many factors can influence citation rates. However you use these metrics, it is good practice to provide more information on the metric you use: What does the metric measure? How is the score calculated? Which tool did you get the score from? When did you get the score? Up to this point, we have talked about article-level metrics and journal-level metrics. In the next video, we'll focus on author-level metrics. See you next time.