Now that you've learned the basic techniques for building database search strategies, you will soon realize that it can be a time-consuming task. Let's say your research has to do with "premature infants", and you're going to run a thorough search for that concept. As usual, your first step is to find all relevant controlled vocabulary terms, such as MeSH headings. Remember this table? You're going to fill in this cell first. You can use tools such as OvidSP MEDLINE, EMBASE, or PubMed to identify these potentially relevant terms. In the end, your controlled vocabulary search strategy could look like this... and you know that's not enough.

You also need to do text word searches for the concept, to complement your controlled vocabulary search and catch anything that is not properly indexed in the database, or to search in a database that does not support controlled vocabulary at all. You have this cell in the concept table to fill in. So, after some work, your text word search strategy for the concept could look like this... and now you need to "OR" those two searches together--something like this (a simplified written-out sketch also appears below). And this is only one concept in your search. A typical systematic search usually involves at least three to four concepts. So, as you can see, this can get very complicated very quickly.

Now let's consider another scenario. In writing your review, you typically indicate in your protocol some inclusion and exclusion criteria regarding study design. For example, you may decide that you're only going to look at studies designed to answer therapy questions, such as randomized controlled trials, and after some work, you might come up with a couple of search strategies for that.

"But, wait," you say. "I can't be the first one, or the only one, doing research involving the concept 'premature infants', and I'm certainly not the first one, or the only one, looking for randomized controlled trials. What did other people do? Why reinvent the wheel?"

The good news is that there are a lot of "prefab search strategies" you can use in your own searches. They're known as "filters" or "hedges", and are sometimes called "optimal search strategies", "optimal search filters", "quality filters", "clinical queries", et cetera. Some hedges, known as "topic hedges", are designed to pull database records relevant to a certain topic, such as your topic of "premature infants", while others, known as "methodological hedges" or "study design hedges", are designed to find articles with a particular study design, such as randomized controlled trials.

"Great!" you say. However, as you should know by now, nothing here is 100% plug-and-play. There are always strings attached. Just because others have used a hedge in previous literature searches doesn't necessarily mean it's a good one. It may not work well for your project, for a variety of reasons. Some hedges are expert-informed--they came out of brainstorming sessions of experts in the field, who basically decided on the terms that best represent their concept. The usefulness of these hedges cannot be determined until they are scientifically validated.

And here's how a hedge is typically validated: a group of references in a database, known as "the gold standard set", is specified as the scope of the validation. Subject experts will manually identify each of the articles in this set as either "relevant"... ...or "irrelevant"... Then the hedge to be validated will be applied in a search against the gold standard set in the database.
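To make this more concrete, here is a minimal illustrative sketch of what such a strategy might look like, written in Ovid MEDLINE syntax. These exact lines are a simplified assumption for illustration only--a real strategy would include many more synonyms, and line 4 is just a single-line stand-in for a full, validated RCT hedge:

1. exp Infant, Premature/                      (controlled vocabulary: exploded MeSH heading)
2. (premature infant* or preterm).tw.          (text words in title and abstract)
3. 1 or 2                                      (the "OR" step for this one concept)
4. randomized controlled trial.pt.             (stand-in for a study design hedge)
5. 3 and 4                                     (concept "AND" study design)

Line 3 is the "OR" step described above; in a real project, each additional concept would get its own block of lines like 1 through 3, and a published methodological hedge would normally take the place of line 4.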
And of course, when the hedge is run against the gold standard set, not all references in the set will be retrieved. So naturally, there will be four groups of references here: the true positives--relevant references that are retrieved; the false positives--irrelevant references that are retrieved; the false negatives--relevant references that are not retrieved; and the true negatives--irrelevant references that are not retrieved.

The numbers of references in those groups are then used to calculate the sensitivity of the hedge, which is the number of true positives as a percentage of all relevant articles, and the specificity of the hedge, which is the number of true negatives as a percentage of all irrelevant articles (we'll walk through a quick numeric example in a moment). For example, if a hedge has a sensitivity of 99%, it means it retrieves 99% of all the articles known to be relevant in that gold standard set, but it may also retrieve a lot of other references that are known to be irrelevant. Similarly, if a hedge has a specificity of 99%, it means that it filters out 99% of all the articles that are known to be irrelevant in the gold standard set, but it may not retrieve all the articles that are known to be relevant.

So if you want your search to be absolutely exhaustive, and you don't mind filtering out the false positives yourself, choose a filter with maximum sensitivity. On the other hand, if you want your search to be extremely focused, and you do not absolutely need to have everything, choose a filter with high specificity. In most cases, you are going to end up choosing something in the middle--one that has a balance of sensitivity and specificity.

As I said before, the hedge you are going to use may not have been validated at all. This does not automatically mean that it is a bad hedge. It just means that you need to do more work to evaluate it before using it in your own searches. If the hedge IS validated, you should look at the validation results and make a choice based on its sensitivity and specificity.

Even if a hedge has been validated, there are still a number of things you should consider before using it. Hedges are usually validated with a specific database, for example, OvidSP MEDLINE. This means that the validation results only apply to that database. If you need to use the same hedge in a different database, you will need to "translate" the hedge to make it work in the new database. This is necessary because the new database may require a different syntax. It may use a different controlled vocabulary system, or it may not support any controlled vocabulary system at all, as we have seen in previous tutorials.

As we saw from the validation process, the choice of the "gold standard set" is critical. The validation results could be different if a different gold standard set is used, and you have to consider whether this will affect your search. For example, if the choice of the "gold standard set" in the validation process is subject based--that is, all the references in the set are from the same subject area (such as psychiatry)--the validation results may only be useful in that subject area. If you are searching in a different subject area (such as education), your search may not be as good as the validation results indicate.

Another thing to look at before using a validated hedge is when it was last validated. This is especially important for topic hedges, such as "premature infants". This is because, as science advances, there may be new ways of describing the concept, and new controlled vocabulary terms may be added.
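Here is that promised worked example of sensitivity and specificity, using made-up numbers. Suppose the gold standard set contains 1,000 references, of which 200 are relevant and 800 are irrelevant, and suppose the hedge retrieves 190 of the relevant references and 160 of the irrelevant ones. That gives 190 true positives, 10 false negatives, 160 false positives, and 640 true negatives, so:

sensitivity = true positives / all relevant   = 190 / 200 = 95%
specificity = true negatives / all irrelevant = 640 / 800 = 80%

A hedge like this would catch nearly everything relevant in the set, but you would still have to weed through the 160 false positives yourself.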
Beyond new vocabulary, database search algorithms can also change over the years, which may cause an old hedge to become less useful.

So where do you find existing hedges? Perhaps the best-known methodological hedges are the "clinical queries" developed and validated by Haynes and colleagues at McMaster University. This page tells you exactly how these hedges were developed and validated. For example, these are the hedges for both Ovid MEDLINE and PubMed. These clinical queries are built into OvidSP MEDLINE and PubMed. In OvidSP MEDLINE, they are available as an "additional limit" option--right here. So, for example, I can pick the maximum sensitivity hedge for pulling therapy-type articles from the database. In PubMed, clinical queries are linked off the home page--right here. So if I put in my search terms... ...I can then pick a category, and choose whether I want to apply the more sensitive/broad hedge or the more specific/narrow hedge. And you can see that the numbers of results retrieved are very different.

The "PubMed Search Strategies Blog" has a large number of contributed hedges, which you can evaluate and use in your own searches. In fact, our example of the "premature infants" concept came from here.

The InterTASC Information Specialists' Sub-Group Search Filter Resource is a "collaborative venture to identify, assess and test search filters designed to retrieve research by study design or focus. The Search Filters Resource aims to provide easy access to published and unpublished search filters." But, as they specifically say here, "inclusion of a search filter is not an endorsement of its validity or a recommendation." For example, if I am looking for a hedge for "quality of life" studies, this table shows the relevant hedges they have collected. You have the database the hedge was developed in on the left, and where the hedge was published on the right. So let's say I am interested in this particular one for PubMed--click here... get the .pdf... and this paper has all the details about the hedges and their validation results.

The Scottish Intercollegiate Guidelines Network (SIGN) publishes on their website the search filters they use for their research. Notice that they specifically mention here that these may "provide less sensitive searches than used by other systematic reviewers such as The Cochrane Collaboration, but enable the retrieval of medical studies that are most likely to match SIGN's methodological criteria." So, for example, here is the filter SIGN uses in Ovid MEDLINE, EMBASE, and EBSCO CINAHL.

In this tutorial, we looked at filters and hedges, which are "prefab search strategies" to help you retrieve results from databases. They are very useful when used appropriately, but as with everything else, you should always evaluate a hedge before using it... read about its validation process... look at its sensitivity and specificity scores. Which database was the hedge developed and validated in? Can you use it for the subject area you are going to search in? When was the hedge developed--would it still be valid? As usual, a good resource for advice on using search filters and hedges is your medical librarian or other information professionals trained in conducting systematic searches. That's it for today--I'll see you in the next video.