Back in 2019, U.S. News & World Report announced an intention to add a faculty scholarship metric to its law school ranking methodology. There were proponents and opponents, and many questions about how the sausage would be made: which source(s) of data to use; how to account for interdisciplinary work, or for faculty who write books (for which we have fewer sources of citation analytics); how to guard against gaming the system through self-citation or citation to one’s colleagues (the former is often tracked, the latter less so); how to correct for differences between more-cited and less-cited topical areas; and, perhaps most importantly, how to adjust for traditionally marginalized authors, namely non-white, non-male members of the academy.
Last week, it became public knowledge through this Bloomberg Law article that U.S. News had dropped its plans, though no rationale was given. The article was corrected after its initial publication, which suggests that the authors themselves did not learn of U.S. News’s decision to pull back until their piece forced U.S. News to reach out to Bloomberg to correct the record; indeed, later paragraphs still read as though the scholarship rankings are forthcoming. The article was posted at 3:00 AM on August 19, and the disclaimer at the end reads: “(The first paragraph of Aug. 19 story was updated to reflect that U.S. News was only considering a new ranking. Second paragraph was added to indicate that U.S. News decided not to pursue this new ranking.)”
So where does this leave us? Personally, I think USNWR was really hoping to go with a “one source fits all” approach, and HeinOnline is the best for pure law review citations, but it is relatively light on interdisciplinary work and no real help with books. And that’s the problem in legal publishing: we have no single “best” source for citation analytics. Before Google Scholar and HeinOnline started tracking citations, the legal academy mostly focused on SSRN data; in fact, some were so focused on SSRN that certain faculty opposed their schools hosting institutional repositories out of fear that doing so would pull their SSRN stats down (this has been debunked). Today, Google Scholar has the best universe of data in terms of coverage, thanks to being publisher-agnostic and to the strength of its book-scanning project; the problem, however, is that it is a *very* messy universe, with many citations counted multiple times (and other errors, like picking up index and table-of-contents entries as “publications”). There would need to be significant data cleanup and disambiguation, and Google has no real incentive (yet?) to do that.
Scopus and Web of Science would be better than Hein on interdisciplinary data, and they cover a small universe of books (nowhere near Google’s, though), but their legal holdings are slight: a few hundred law reviews, about half of them commercially published European journals. Both began as STEM-focused, peer-reviewed-only databases, which is why U.S. law reviews are not well covered; they have relented a little on that, and they would be more viable options if they relented all the way.
And while SSRN’s intention is to be a source for working papers, many authors use it as a personal repository; the rub is that SSRN will not post a final published version unless the author can show that she, and not the publisher, holds the copyright. If the publisher retains copyright, the final version cannot be posted.
What all of this boils down to is that, with no one source, we should be using all of them, whether analyzing scholarly impact, compiling metrics for hiring and promotion decisions, or claiming and enhancing (where possible) author profiles. Some Northwestern Law faculty have created their own Google Scholar author profiles, and that does help filter out some of the bad data (if you have not, you can start here and click on “My Profile”; you’ll need to establish a Google account if you don’t have one). Everyone who has published an article that is in HeinOnline has an author profile there, whether they’ve claimed it or not. And yes, we should post whatever we want to SSRN, subject to the above caveat.
Thus, while the jury is still out on the value of quantitative scholarly metrics, the fact remains that they are here to stay, and with that in mind, we should strive to make sure our work is properly recognized and visible.