Research Grant Writing Webinar Series
Observational Studies - Video
Video Transcription
I'm actually one of the more junior faculty members on this really nice panel of members giving their insights about research. I actually went through the AUG Scholars Program two years ago and found it very valuable, and so was actually able to get some grants funded. And this lecture here today is a lecture on observational studies, which I'm sure is a topic that people have reviewed in the past. I'm hoping in this lecture to share some of my thoughts on observational studies, and to fold in some of my reflections on grants: the difference between designing a great research project that you can complete as a fellow versus designing a great grant project that people can get excited about funding. So the outline of our lecture today is we'll just go over how to answer these questions in research: how do you build these inferences about what's going on or what is causing a specific disease if we don't have an experimental design, and then some of those errors or pitfalls that can occur along the way, and then we'll discuss some of the observational study designs. What's not going to be covered in this lecture is randomized controlled trials, or controlled experiments that are looking at animals or bench research experiments. We're not going to cover sensitivity and specificity, or quality of evidence or strength of recommendations that come out of systematic reviews or guideline reviews, those types of things. So when we look at study designs, they fall into two basic groups: descriptive studies and analytical studies. In analytical studies, there's a comparator, so we're comparing something. And the two different types of analytical studies are observational and experimental. Experimental studies are those gold standard studies that we're looking for, which include randomized controlled trials or, like we talked about before, the bench research experiments.
And then there's observational studies, where you specifically have two groups and you're looking at either exposures or outcomes, so that you at least have a comparative group and hypothesis-driven research. When you take a big step back on designing a research idea, or as you start getting more experience with research, there's the big concept of what you're trying to prove. So I want to prove this, and that's the concept. But then there is, how do we measure that? And that's the operationalization of what we do in research. And this applies to how we design our research studies, but especially when you shift over into: I'm going to design a grant and I'm going to try to get other people excited about funding my research idea. How am I going to sell that I'm going to answer this question in a great way? That goes into the operationalization of things. And then, how do we make these valid inferences from what we find? So our goal, again, is we have this research question and it's supposed to get at the truth in the universe. And our hope is that we find a cause that leads to an effect, and that's giving us insight on what's going on. But often, when we try to do research studies, we're not actually showing cause and effect. We're actually showing there's an exposure and then I see a disease. And so we're measuring an association, but we haven't gotten to that causal relationship yet. It's very important to understand what you need to do to build a causal pathway. And there are some very specific guidelines on what needs to be shown before you can say that this is causing this disease process. One of the things that you really want to know is, did the exposure, so the exposure to something like radiation, come before a disease happened, like lymphoma or leukemia down the line?
And when you're looking at these causal inferences, of course you want to show the association, and that's sometimes the easiest thing for us to show in our research studies. The other thing we want to show is a dose-response relationship. And my radiation example is a very concrete example of that, because as your exposure to radiation increases, you can show that a person's risk of a cancer later on can increase. And again, that goes into biologic plausibility. People have done this type of research back in the 1950s, 1960s, and 1970s, and have really gotten into: okay, so we saw this exposure, which is x-rays, and then we saw this outcome, which is cancer. And we saw strong associations, we saw a dose-response relationship, but does it make sense? Is there anything we can find in biology or in nature showing this makes sense? Because sometimes we'll find associations that don't make any sense at all, and is that really leading us in a good direction for our research? And then the final one is that other explanations should also be considered; maybe it's not the radiation, maybe it's something else. And so you need to have a healthy skepticism of all of your research ideas, because you want to make sure that you're getting to that truth and not being led in a misleading direction. So again, going back to the goal, we're looking at our research question, and we want it to reflect what's really going on in nature, the truth in the universe. And so what we do after we have our research question is we need to start designing, so we have to build a study plan. But then, after we build the study plan, we actually have to implement it and get it done in a real study. So there's a design approach, and the design really should come ahead of what you do in your study, hopefully. And then the implementation should actually follow a really nicely planned design.
And that's sometimes a great thing about grant writing: it really makes you design the right study to make meaningful research, rather than trying to design a really feasible study to get done real quick. When you're doing your design, you implement your design, you have your actual study, and then you want to look at your findings in the study. And then this gets back to, can you explain your findings? So you infer what you're finding in your study, and then do these findings actually reflect what you want to portray as what's really going on in this disease process, or the big truth-in-the-universe question? So I'm just going to talk about one study of mine that actually started off very, very simple. I was just doing this as a really simple research idea that I really wanted to be able to get done. And for feasibility, I wanted to use the patients in my clinic. And I'll describe how I changed that to have a much better population, in terms of making it a grant that could be successfully funded. So when you have your research question, you have to answer two things. There's a target population: who do you really want to study? What disease are you looking at? And what's the phenomenon? What are you interested in showing? And so for me, I was interested in looking at postmenopausal women. And I was specifically interested in studying vulvovaginal symptoms, or symptoms of vaginal atrophy, in postmenopausal women. And this went way, way back to my fellowship. When I was in fellowship, I wanted to show that if you treated women's incontinence, their vulvar symptoms would get better. And so I went out to look to see, how can I prove this? And I couldn't find any validated psychometric questionnaire that I could use to measure that. And because I couldn't find any questionnaire that would measure it, I had to take a step back and say, well, what am I going to do now?
And that kind of led me down the road of, this is a need. There's definitely vulvovaginal symptoms and vaginal atrophy going on. And what can we do to study it? And so the first thing that happens in your study design, or your study plan, is you have to say, well, who am I going to sample? And for the feasibility study, so I want to get this done and I don't have any funding, I was looking at women who came into my clinic who were presenting for an annual exam. So postmenopausal women who are coming into a gynecologist. And this is becoming more and more rare as we have different guidelines on pap smears and things like that. And so you can already see that women coming into a gynecologic exam who are postmenopausal are already a fairly selected population. So yes, is that feasible? Yeah, it's great because I have those patients in my office right there, ready to be studied. But it doesn't necessarily mean that it's the best population of women to study if I'm just looking at all postmenopausal women, because not all postmenopausal women come in to see a gynecologist. And so you have the study plan, and the original study plan was looking at women at their annual exam. And then you think about, well, how do I introduce more errors? And if you think about women coming in for their annual exam and then you approach them for participation in a research study, there's probably a little bit of difference between the women who are willing to participate in a research study and women who aren't. And so you have the actual subjects that you recruit, and then that's a subset of women in the clinic. And then you have the actual measurements that you're trying to measure. And you have, along the way, all of these potential errors that you can introduce into your study design. One of the errors is, do women remember the symptoms accurately? And what symptoms do you actually ask about? How many episodes have they had in the last 30 days, those types of things. 
So in this example, what actually ended up happening is, along the way, as I was learning more about the research and getting more experience with writing for grant funding, we actually had money to then recruit women who weren't coming in for a gynecologic exam, so we could go out into the community and recruit women who weren't actively seeking treatment. And that removed a lot of the bias from our samples. And then we also developed an instrument that would actually measure these symptoms. But this slide sums up, along the way, when you're designing a study, either just a study that you're trying to get published as a research project, but also a study that you're trying to get somebody excited about funding: if they're going to give you money, how are you going to get that great sample and really remove those errors from your research? The next thing we're going to talk about is threats to validity of inference. And there are three major threats to validity: chance, bias, and confounding. Chance is a random error due to an unknown source of variation that distorts the sample and the measurement in either direction. To reduce this error, you can increase your sample size, and you conduct statistical analysis to see if what you found is due to chance, or if what you found is really an answer to what you're looking at, or that truth in the universe. Bias is different. Bias is a systematic error that distorts the sample and measurements in one direction. And there are lots of different types of bias that can be introduced by the way you design your study. And so going back to the example I was telling you before, where bias was introduced by my study design recruiting women out of a gynecology clinic, versus recruiting women who were asymptomatic and weren't going to the gynecologist, there's a difference in that bias.
One example of selection bias is the Neyman bias, which is the incidence-prevalence bias. And this is much more common in research on things that are highly fatal. So one example is, if you have a really bad cancer where everybody dies within three months of being diagnosed with the disease, you're not going to notice that that disease is highly prevalent in the population. And so if you're looking at different incidences and prevalences of diseases in populations, that's one of the biases. There are also different types of information bias, where you measure things wrong. And this goes back to recall bias. If you ask a woman who has a child with a birth defect whether she took anything in her first trimester of pregnancy, she's much more likely to remember the aspirin or the Tylenol that she took than a woman who has a very healthy baby. There's observer bias. And then there's misclassification bias: did you classify your subjects in the right category before you began your comparative analysis? So there are ways to reduce bias. And this goes back to how you design your study. Being very strict and very thoughtful about your inclusion and exclusion criteria is essential. This goes back to basic IRB writing, writing great inclusion and exclusion criteria when you design your study. And it is also really critical in your grants to say specifically who you're studying, who you're going to include in the study, and who you're going to exclude. Blinded measurements are critical to reduce bias. And we know this from a lot of our outcomes research: it's good to not know what surgery the person had or who the surgeon was. Those types of things, like having another person do certain measurements after surgery, can reduce your bias. And standardizing your measurements.
So if you're asking about symptoms, ask about them in a standard way, hopefully with a psychometrically validated questionnaire. You can also train and certify observers. And you can certainly do this in your research study, where you train a research assistant to do all of the measurements in a certain way. And you can refine or automate the instrument that's doing the measurements. Confounding is another threat to validity. Confounding is an external factor that is associated with both a predictor variable and an outcome variable. Ways to reduce confounding are to anticipate and measure the potential confounders. And when you're looking at certain types of studies, you can do some very specific things up front. You can match: you can say, well, I know that what I'm studying is dependent on age, so I'm going to match women who have the disease and don't have the disease, and I'm going to age-match them. Or I'm going to restrict to a very certain population: I'm only going to look at women 40 to 50 years old. Or stratification. The other thing that we do a lot in research these days is conduct multivariable regression analyses to account for the confounding, and so we have these adjusted odds ratios. And the other thing that you will see with confounding in newer grants, especially when people are looking at your statistical analysis and your innovation sections, is that people have started to talk about propensity scores as well to account for confounding. So we've gone over a lot of the things we talk about as errors in our study designs. And now we are just going to briefly cover some of the observational studies. One of the real key things to do before you start any research project is to design a really good research question that talks about exposure and outcome.
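The stratification idea described above can be illustrated with a small numeric sketch. One standard way to pool stratum-specific results into a single confounder-adjusted estimate is the Mantel-Haenszel odds ratio (a technique the lecture does not name explicitly); the age strata and counts below are entirely hypothetical:

```python
# Hypothetical age-stratified 2x2 tables, each as
# (a, b, c, d) = (exposed & diseased, exposed & healthy,
#                 unexposed & diseased, unexposed & healthy).
strata = {
    "age 40-49": (10, 90, 5, 95),
    "age 50-59": (40, 60, 20, 80),
}

def mantel_haenszel_or(tables):
    """Pooled odds ratio across strata: sum(a*d/n) / sum(b*c/n)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Stratum-specific odds ratios, then the age-adjusted pooled estimate.
for name, (a, b, c, d) in strata.items():
    print(f"{name}: stratum OR = {(a * d) / (b * c):.2f}")
print(f"Age-adjusted (pooled) OR = {mantel_haenszel_or(strata.values()):.2f}")
```

Comparing the pooled estimate against the crude odds ratio from the collapsed table is a quick way to see how much of an apparent association is explained by the stratifying variable.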
Exposures can be risk factors for diseases. They can be protective factors. But exposure should precede the disease that you're studying, and exposure should influence the disease onset and progression. And if you can think of exposures, you can usually classify these into two-by-two tables to summarize your research or get you on the right track as you're designing things. And so we're all used to this type of two-by-two table, where we have people who got exposed or not exposed, yes or no, and people who have the disease, yes or no. In a cross-sectional study, we're looking at a population and we sample that population; this is a descriptive study. And so we have some people who have exposures and have the disease. We have some people who have exposures and don't have the disease. And then we have the opposite: we have no exposure, no disease. And then we have some people who have the disease who didn't get exposed. And this is just a basic exploratory study looking at, hey, I'm wondering if incontinence is related to obesity. And of course, you're going to find people in the population who have both incontinence and obesity, but you're also going to have very thin women who don't have incontinence, or very thin women who do have incontinence. You're going to want to look at those in terms of a cross-sectional study. But the problem with cross-sectional studies is it's a chicken-or-egg thing. Which one really came first? Because you don't have a temporal timeline; you didn't measure which came first. It could be that because women with urinary incontinence leak when they work out, they work out less, and it's actually the urinary incontinence that came first. And you just don't know that if you're conducting a cross-sectional study. To look at that, there are cohort study designs. And cohort study designs select the population and then look at the exposures and the outcomes.
In a prospective cohort study, you select your population, you see who's exposed and who's not exposed, and then you see who has the disease and who doesn't have the disease. So the prospective cohort study is looking at the present and then following people into the future. Prospective cohort studies are very expensive to conduct and sometimes a little bit difficult and tricky, to keep all of these people in the same study and not have dropout or loss to follow-up. But it's a true longitudinal study. There are retrospective cohort studies, which get a little tricky, where you select the population in the present and then you look back to the past and see if they were exposed. But what's key about retrospective cohort studies is you're not selecting the population based on whether or not they have the disease. Advantages of cohort studies: it's the best way to ascertain the incidence and natural history of a disorder. Cohort studies are a way to evaluate multiple outcomes of a single exposure. And they're very useful in evaluating rare exposures. Disadvantages: they have selection bias, they're very inefficient for rare diseases, and we already talked a little bit about how they're very expensive. But we've gotten some beautiful information from very large cohort studies, and some of those are the Framingham Heart Study and the Nurses' Health Study. Case-control studies are the opposite. In case-control studies, we select cases and controls in the present based on whether or not patients have the disease. And so it's a completely different selection of subjects in your case-control study. Case-control studies have some advantages. They're very efficient for rare diseases. They're very efficient for diseases with long induction or latency periods. So maybe you were exposed to DES in utero, and after the DES exposure you developed adenosis and clear cell carcinoma of the vagina 20 or 30 years later.
And so that long induction makes a case-control study ideal for that type of exposure. And the other thing case-control studies are able to do is evaluate multiple exposures for a single outcome or a single disease: yes or no, a person has the disease or they don't. Because you're not measuring it directly, the temporal association may be unclear, so you're back to that chicken-or-egg question. You cannot determine incidence rates from case-control studies, because remember, you selected based on whether or not people had the disease, so you have no idea what the incidence rates are in the population. They're more susceptible to bias, because selection is so critical to doing these studies well. And if the exposure is rare, the design is inefficient, although it is an efficient design for rare outcomes. And that concludes the lecture for now. There are a couple of extra slides at the end to review that briefly cover relative risk and odds ratios. And another thing to look up is the STROBE and the CONSORT reporting guidelines, because these are critical in terms of designing your study and making sure you measure everything that you need to measure, so that when you go to report your findings...
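The relative risk and odds ratio calculations mentioned in those extra slides can be sketched from a single two-by-two table; the counts here are hypothetical:

```python
# Hypothetical 2x2 table:
#                 disease   no disease
# exposed            a=30       b=70
# not exposed        c=10       d=90
a, b, c, d = 30, 70, 10, 90

# Relative risk: incidence in the exposed over incidence in the
# unexposed. Valid in cohort designs, where incidence is measurable.
relative_risk = (a / (a + b)) / (c / (c + d))

# Odds ratio: the cross-product ratio. This is what a case-control
# study estimates, since sampling on disease status hides incidence.
odds_ratio = (a * d) / (b * c)

print(f"RR = {relative_risk:.2f}")  # 3.00 with these counts
print(f"OR = {odds_ratio:.2f}")     # 3.86 with these counts
```

When the disease is rare, the odds ratio approximates the relative risk, which is why case-control results are often read as if they were risk estimates.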
Video Summary
The video features a junior faculty member who shares insights on observational studies, highlighting the difference between designing a research project as a fellow and designing a grant project that attracts funding. The lecturer discusses study design, focusing on descriptive and analytical studies. Analytical studies are further categorized into experimental and observational studies. The importance of operationalization in research design and grant writing is emphasized. The lecturer also explores the goal of research to identify cause and effect relationships, discussing the need for exposure to precede disease and the importance of dose-response relationships and biologic plausibility. The lecture covers threats to validity, such as chance, bias, and confounding, and suggests strategies to reduce bias, including blinding and standardization. Different types of observational studies, such as cross-sectional, prospective cohort, and case-control studies, are explained, highlighting their advantages and disadvantages. The lecturer concludes by mentioning the significance of reporting guidelines and providing resources for further information. No credits are mentioned.
Meta Tag
Category
webinars
Category
professional concerns
Category
research
Session
182383
Keywords
observational studies
study design
grant project
cause and effect relationships
observational study types
reducing bias