The Lift 010 | Evaluating the Effects of Supervision with Dr. Lisa Britton
An ABA Technologies Academy Podcast
1.0 BACB, 1 Supervision
This podcast episode describes the importance of actively evaluating the effects of our supervisory practices. We discuss several different sources of data and provide specific strategies for how to collect and analyze them. We also cover what to do once you have the data. Finally, we discuss the need to not only engage in this type of evaluation but to expressly teach trainees about its importance and how to take a structured approach to carry out such an evaluation.
What you’ll learn in the course and be able to do afterward
- Attendees will be able to identify at least two risks and two benefits associated with evaluating the effects of supervision.
- Attendees will be able to identify the four primary sources of data for the evaluation.
- Attendees will be able to describe strategies for collecting relevant data for the evaluation.
This course is recommended for BCaBAs, BCBAs, and students of behavior analysis who will be or are currently receiving supervision.
The book focuses on the importance of strong relationships and teaching higher-order skills throughout any supervisory endeavor. The authors provide a conceptually sound set of supervision practices that will guide the actions of those who aspire to become better supervisors or mentors at any point in their careers.
Dr. Linda LeBlanc (00:08):
Welcome everyone, to this episode of The Lift. We're gonna be talking about evaluating the effects of supervision. This is Linda.
Dr. Tyra Sellers (00:16):
Dr. Linda LeBlanc (00:18):
We are gonna be joined by our guest, Dr. Lisa Britton today. I have known Lisa for well over 20 years and count her as a dear friend and colleague. We're so happy to have her with us. She's a BCBA and the owner of Britton Behavioral Consulting. She got her master's degree and PhD from the University of Nevada, Reno, which is where I first met her. She was working with Dr. Jim Carr, who I also know. Her primary focus is on providing behavior support to students with special education needs and also providing effective remote supervision for fieldwork hours. She's published a fantastic book titled “Remote Fieldwork Supervision for BCBA Trainees” with her co-author Matt Cicoria, who also has a podcast. If you haven't read it, you should. Welcome, Lisa. We're so happy to have you with us.
Dr. Lisa Britton (01:19):
Oh, thank you. I'm so happy to join you guys.
Dr. Linda LeBlanc (01:23):
Lisa, we have a quote for this chapter, and it says, “If you haven't measured something, you really don't know very much about it,” by Karl Pearson. This chapter's really about trying to measure something that a lot of people think of as pretty nebulous.
Dr. Tyra Sellers (01:42):
That is true. I just want to jump in and say, I think the requirement that we need to evaluate the effects of our supervision makes so much sense, but apparently it freaks a lot of people out.
Dr. Lisa Britton (01:56):
I think that's a really good point. I think the word nebulous is a very accurate one. I think that it can be really challenging, and so we feel like there's this obligation to collect the data, but then what is it that we're supposed to do with the information that we gather and how is that going to impact our behavior moving forward? I can't wait to have this conversation.
Dr. Linda LeBlanc (02:19):
Assuming we actually get data and don't just go with our own intuition and observation, which can be a little faulty. Let's get started with one of the themes throughout the book: a focus on self-reflective practice, and of course, also building strong collaborative relationships, as well as teaching a variety of critical skills. That notion of evaluating the effects of your supervisory practice, to me, goes right back to self-reflection. It would never occur to you to evaluate the effects of your supervisory practice if you weren't reflecting on, “How is this going? What am I doing?” and being able to even describe that. I could be wrong about this, but we may well have been in a period where most people didn't take a particularly systematic approach to evaluating the effects of their supervision. They were doing the best they knew how to do, hopefully seeing some good effects, and likely being convinced by those good effects. Hopefully now we can think of that as the old school way to do it, but I think there are a lot of risks with that approach. I took that approach for a long, long time, and all of those risks were true for me. What are some of the risks that worry you when people take that approach?
Dr. Lisa Britton (04:03):
Oh, great question. Obviously, there's a risk certainly for the trainee, but also a risk for the supervisor, for future trainees, as well as the field as a whole. We run the risk that stakeholders are going to make the assumption that behavior analysis is ineffective, and they may choose a different approach, and it might not be evidence based. It's just really important for us to ensure that we're evaluating our supervision practices and that we're really giving our trainees the best skills moving forward. I think on the flip side, in terms of benefits, not only does the trainee benefit, but we benefit as well as supervisors. I can't emphasize enough how much I've improved as a supervisor by really looking at the performance of my trainees and how I am building their skill sets or not, and what I need to do differently accordingly. I just can't wait to see what kind of supervisor I am 10 years from now, because hopefully I will be much better than I am today.
Dr. Tyra Sellers (05:24):
So well put and I think you nailed it in terms of the multifaceted risks associated, right? Risks to yourself, not developing appropriate skills, or maybe strengthening skills that aren't that effective, risk to your trainee. They may not develop appropriate clinical repertoires, or they might not develop appropriate future supervisory skills or worse, that they might develop defective skill sets that could be dangerous and impact the profession and to stakeholders as well. I think that there's a direct risk to clients too, especially if we're thinking about trainees, people accruing their field work experience hours, they're working directly with clients and so substandard supervision, or even okay supervision that you're not actually keeping an eye on to make sure that you're getting the results you want. It stands to reason that the trainee could be engaging in non-optimal behavior with clients and clients could be impacted. Does that freak out both of you?
Dr. Linda LeBlanc (06:41):
[Laughing] It freaks me out, yes, but I will say the flip side of that is the benefit: do it, and if you do it, you not only avoid those risks, but you also learn more. Lisa, as you were talking about, “I can't wait to see what I'm gonna be like in 10 years,” that resonates with me, in that a whole lot of my continued learning, growth, and development, particularly in the last 5 to 10 years, has been based on efforts to become a better supervisor, reflecting on why I make the clinical decisions that I make and how I can convey that to other people. To me, it has kept the field interesting, and that keeps me excited and learning and growing. Even though I'm busy, it's my best shield against burnout. As long as I feel like I'm still getting better, still have something to offer, and still learning, I can keep doing this stuff, and I think that's a benefit. The other benefit is that we have these biases that we have to recognize are there. The most core one, for me, is that I wouldn't be supervising that way if I didn't think it was a pretty good way to supervise. I might be dead wrong, [Laughing] it might be wildly ineffective, but the fact that I'm behaving that way speaks to at least a little implicit bias, in that that's the way to do it, or that it's likely to work based on my history. If you're not directly evaluating the effects, you'll never find out that it's not a particularly good way to do it. What are some of the other kinds of biases that either of you have seen that you might bring to the table?
Dr. Lisa Britton (09:02):
A slightly different way of looking at biases is we might make some inaccurate assumptions about our trainees in terms of how skilled they are and what it is that we need to be focusing on. Either somebody might be able to verbalize in a very effective way, a skillset that may not actually be there, or conversely, we might, because somebody is very quiet, make some assumptions about need areas that actually aren't there. If we're not doing an assessment of the trainee skills, then we run the risk of not addressing a need area because we assume it's not there or working on a skill that we could have been using that time for something else. That's something that's really scary to me. We are wasting time and our supervision time is just so valuable.
Dr. Linda LeBlanc (10:06):
And limited, so we gotta make the most of it.
Dr. Tyra Sellers (10:07):
Absolutely, and so critical. I think for me, it's very similar, just that idea that I could have some sort of bias in some way, that maybe someone isn't making progress because of some characteristic or some factor that I am ascribing to them or somehow tacting, versus recognizing that I'm a variable in this equation as well. Have I been looking not only at the outcomes, which tell me this person isn't making progress, but also at the inputs? It's not just that trainee by themselves. What about my teaching strategies could be non-optimal for that person, such that if I made a change, they could have a different outcome? I think quite similar to what you two expressed. I think, first of all, just stepping back, having the conversation about the risks and the benefits is really important, because for individuals who don't actively engage in evaluating the effects of their supervision, it may be because they just haven't stopped to really think about why it's so important. That's the foundation of the conversation. I think the next piece that stumps folks is, “Well, what do I look at?”
Dr. Linda LeBlanc (11:35):
What do I measure?
Dr. Tyra Sellers (11:36):
Exactly. What sorts of things, like you said, Linda. If I feel like it's going okay, if I feel like things are on track, that should be enough because I'm experienced, or what have you. We all know, even those of us with a lot of experience, that's not always true. Let's talk a little bit about some of the things we personally have used to measure the outcomes or the impact of our supervisory practices, that maybe folks listening could grab and say, “Oh yeah, that's easy.” For example, for me, I like to keep an eye on how many appointments are missed. How many of my meetings are canceled, or how often does my trainee show up late? I know that doesn't seem like it has anything to do with, “Did they master this thing you taught them?” No, but it might speak to the health of my supervisory relationship, and the health of my supervisory relationship will directly impact the effectiveness of my supervision. That's an easy thing to track. I would review fairly often, “How often was that person late? How often did they cancel?” If it was a pattern, it gave me something to start looking into. That's one thing I think people might not think about linking to the effects of their supervision. That's an easy one, right? Just put a note in your calendar.
Dr. Lisa Britton (13:08):
Yeah. I think that's a really good point and along the same lines is how did you feel when that cancellation occurred? [Laughing] Were you disappointed or were you excited? Taking note of your reaction to those type of things to assess that relationship as well. Are there some avoidance behaviors that maybe are occurring on both sides and what can we do to fix that?
Dr. Tyra Sellers (13:38):
Yes. [Laughing] Absolutely. For listeners, the obvious place to go, if you're using a competency-based approach to your supervisory practices, is a spreadsheet or document that you're keeping that would say, “I introduced this content area or this skill on this date, and it was mastered on this date, or it wasn't mastered and I had to continue to go over it.” Those are the easy things we could go to, right? Look at the acquisition, the maintenance, the generalization of the skills and the knowledge that you're trying to impart to your trainees. That one seems easy. You could have a percentage of task list items mastered, or you could see whether, for things of equal difficulty, people are mastering them faster. It's interesting that when I speak to people, and when some colleagues and I did a study asking questions about what you measure and how you measure it, people really felt like, “I wouldn't even know where to start with measuring the outcomes of my supervision. I don't even know what to look at,” but you would look at the same kinds of things you do with client programming, right? Like percent mastered, rate of mastery, need for booster sessions, failure to maintain, failure to generalize. I know you do a lot of specific individualization in your work, Lisa. Do you have some specific things that you look at?
Dr. Lisa Britton (15:16):
All of the above. [Laughing] Those are some really great things to track, and it takes time to develop those systems. I think it's really important for people to not feel like, “I don't have these things, so I'm just not gonna do it.” Pick one thing and develop a system around that one thing. Once you have that in place, then you can introduce another thing.
Dr. Tyra Sellers (15:40):
I'm so sorry to interrupt you but thank you for saying that. The last thing we want to do is say, “You should have all of this, and it should be perfect right now.” None of us have perfect systems, so thank you so much for saying that. Just pick a place and get started and that will do wonders for the world.
Dr. Lisa Britton (16:00):
Absolutely. I think another piece that we can look at is client outcomes. Typically, our trainees are working with clients, and the ultimate goal is building the skill sets of their clients. Let's look at client data as a measure of the effectiveness of our supervision process. Are they building their skills in terms of skill acquisition? If they've conducted a functional behavior assessment and introduced a behavior plan, are we seeing an appropriate reduction in those problem behaviors, and so forth? It may be a more indirect measure of our supervision practices, but it's an important one, nonetheless.
Dr. Tyra Sellers (16:55):
I just wanted to high-five Lisa, because indirect measures are okay too.
Dr. Linda LeBlanc (16:59):
They absolutely are, and sometimes it's indirect, but just barely. Say there's a specific thing that you are working on with your supervisee or trainee, and really, all of these ideas are equally applicable to RBTs that you're working with. They don't have to be in coursework for you to be having positive effects with them. If what you're working on with them is more naturalistic teaching and having more engaging, fun sessions, and you see clients smiling and laughing, responding more quickly, being more engaged, initiating more responses, with shorter latencies to respond, then I would suggest you are producing an effect on that supervisee's behavior that's directly observable as an effect on that client. Some of them are gonna be a little bit more indirect, but some of them can be really quite direct.
Dr. Lisa Britton (18:07):
I agree. Good point.
Dr. Tyra Sellers (18:10):
Yeah. I just want to talk, for a second, about things like affect or indices of the therapeutic relationship. Whether the supervisor is taking a look at how their trainee or, to Linda's point, their staff, their RBT, or their BCBA (maybe you're a senior clinician and you're supervising another BCBA) interacts with a client or a caregiver, or they're thinking about their own interaction with that person, I think there's a tendency for some people to sort of pooh-pooh or turn their nose up at the idea that you can come up with at least some modicum of an objective rating of affect: back and forth conversation versus someone just sitting there passively and not being engaged, smiling or not, hanging out a few minutes after the meeting just to say thank you, or whatever it's gonna be. I wonder if maybe we can spend just a few minutes talking about that, because I think it's something that we don't do very well as behavior analysts. And if we're not doing well with it, we're probably not teaching it well to other people. Especially…
Dr. Linda LeBlanc (19:38):
That notion of the person that you're supervising, how responsive and engaged and interactive they are with you in the supervision session.
Dr. Tyra Sellers (19:50):
Yeah, or how engaged and interactive they are with other people that you are hoping to impact or how those individuals are engaging with your trainee. Maybe it's a parent and you're watching your trainee do an interview. How is that other person? What are some of the things that you all think people should easily be able to take anecdotal notes or maybe come up with a rating scale? Let's just give listeners some ideas.
Dr. Linda LeBlanc (20:18):
I will say, that's something I never take data on unless it feels like a problem. When it's going well, you're just enjoying the experience. You're talking back and forth. You can tell you're both happy and engaged, and it never occurs to me to take explicit data on it then. There are times when it's like, “Ooh, I need to pull things out of this person,” and I wish that they would initiate some of the topics of conversation or state, a little bit, what they'd like to get, or what have you. I think then it very much occurs to me to start measuring something. That's not to say that I shouldn't measure it the other times. I'm just saying, this is the real world, and when it's going really well, it's almost the background for me. It just feels, “Yep. It's good. It's right. This is enjoyable. This person's clearly learning and getting things from this and telling me what they need me to give them.” When it's not there, then I start thinking about those as things that I'd like to see increase in frequency.
Dr. Tyra Sellers (21:39):
I love that, because I think I use something similar, but it made me think that really, at the end of that good interaction, I should be documenting somewhere. Even if I'm not taking explicit data throughout, maybe just a plus at the bottom of that meeting agenda: “That one was a thumbs up.” That way, at the end of the meeting, I'm doing a post-debrief with myself. If I find that I'm hesitant to give it a plus, then maybe I can start taking those data before a bigger problem develops. Do you know what I mean? So, I get it. Don't take data if it's not useful, but maybe do some sort of check-in just to remind yourself.
Dr. Lisa Britton (22:23):
Along the same lines, one of the things that I like to take note of is those meetings that I am so excited for when I see that one on the calendar and it energizes me each and every time, and then I want to tact what it is that's taking place within that meeting and how to reframe other meetings to replicate that with other people. Are there some better expectations that I can put forth as a result of the information that I'm getting from that particular meeting? Is there something I need to do differently in terms of my level of preparation or something like that? I think that's an important piece as well. When something's especially good, you want to replicate that.
Dr. Linda LeBlanc (23:16):
And take data on your own behavior.
Dr. Tyra Sellers (23:19):
Yes, and I love that, Lisa. It’s so linked to the idea of this continued self-reflective practice as a supervisor and that evaluating the effects of your supervision isn't just something you do once a quarter. It's something you're doing all the time. That's such a cool idea.
Dr. Lisa Britton (23:42):
I think another thing to look at in terms of affect, and things like that, goes along the lines of what Linda was talking about: “Does the client approach this person? How is their instructional control? Is there something that they need to do more of?” in terms of pairing with reinforcement and making themselves a conditioned reinforcer. That can be a conditioned reinforcer for the client, for the parent, for other stakeholders, for RBTs, and that list can go on and on.
Dr. Tyra Sellers (24:15):
Yep. I 100% agree. I think sometimes watching a trainee or a supervisee interact with their peers or interact with caregivers is really telling. Are the caregivers leaning in and interested, asking questions? Are their peers doing the same kind of thing? I don't quite know how to measure this, but can you detect or discriminate that there's sort of a mutual, positive regard between those individuals? I think that's really important, because to be effective, we have to have good interpersonal skills. We have to be able to engage with folks. Gone should be the days when all supervision occurred either in the supervisor's office or with the supervisor just watching the client-programming interactions. Whenever you can capture those other interactions that happen more naturally, or maybe aren't quite perfectly linked to a task list item, that's sometimes when you get really critical information about strengths or areas of growth for your trainees and supervisees.
Dr. Lisa Britton (25:28):
This is the place to really put in the point that most of the supervision I do is remote supervision, to be completely honest, and this is the biggest disadvantage of that approach. When you have in-person supervision and you are working shoulder to shoulder, you see that in-the-moment situation, and you're able to reinforce the things that you want to reinforce and correct the things that need to be corrected. You're gonna have a better opportunity to do that.
Dr. Tyra Sellers (26:03):
Yeah, that's a really good point. I wonder what the two of you think about doing formal surveys of other individuals. For example, doing a satisfaction survey of caregivers or something like that. Yes, you're asking about your trainee's behavior, but again, as Linda said, linked to the things that you have been working on. Is that something that either of you have done? Has it gone well? What are some strategies that folks could use to do that?
Dr. Linda LeBlanc (26:39):
I think when your numbers are bigger, such that there is some perceived anonymity (and you want it to be real anonymity; sometimes there's real anonymity, but it's not perceived as such), it goes better. It's more reasonable to expect an honest response from one of those structured surveys. Otherwise, you often get an effect where people tend to report positively for any of a variety of reasons. In fact, one of the recommendations with these kinds of surveys is that anything that's not a four out of five, five out of five, eight out of ten, or higher indicates a problem, because there is such a positive response bias. Even middle scores on those kinds of rating scales can indicate that there are problems. We know that there's response bias built into those, and it gets ever bigger when there's a small number of respondents. I have used those companywide, and it's always something that you're battling against. I don't use them for myself, because I usually have a very small number of respondents, so I know there's no anonymity. I tend to prefer a more informal approach rather than a survey-based approach. If I had a whole bunch of supervisees, I might like the other approach. Lisa, what do you think?
Dr. Lisa Britton (28:27):
I agree with you in terms of some of the challenges that you get with doing that survey approach. I have not had the opportunity to do that with stakeholders, but I have done some surveys with trainees and it has been a really nice way to get some additional information on what I can do better in terms of video models of particular skills or, “Hey, I really want to learn about such and such and that isn't currently within my curriculum,” and things like that.
Dr. Linda LeBlanc (29:03):
Yeah. I like that. It sounds like you do a survey that is really less about a rating, “How good is this thing?” and more about, “What are some things that would be useful to have? What are some things that could be different?” Whether you're asking that live in a conversation or providing a written opportunity to answer, framing it as, “What could make this more valuable? What might be nice to have more of? What might be nice if it were different?” makes it more okay to respond in any way. The right answer is, “Gimme some stuff.”
Dr. Lisa Britton (29:51):
[Laughing] Yeah, I think that's a really good point, and as you were articulating that, it made me realize I do have some rating-scale questions, and I almost always ignore that information. I go to the open-ended information, because that's where the meat is.
Dr. Tyra Sellers (30:08):
Yeah. That's where the good stuff is. I appreciate you talking about the considerations around anonymity and perceived anonymity, and some of the risks associated with not having sufficient anonymity. I think that's really important. I often have people saying that they wish they had enough trainees that they could just use an anonymous survey. I get that, and I understand the perceived idea that if I can get feedback anonymously, it will be honest. I just want to throw out there that I think that should be your last go-to. I think that you should focus on creating and fostering a relationship that will allow both parties to give that feedback directly to one another, and not require anonymity for us to say the hard things. I wonder if either of you have thoughts about that, or if you have your own preferences about that.
Dr. Linda LeBlanc (31:21):
No, I totally agree with that. That's one of the reasons why I prefer to do it as a live Q & A to ask that question of, “Tell me something I could do differently in our next supervision, our next month of supervision, or something that you really wish we could focus on and target,” but doing it as a live conversation. Even if I ask the question now and say, “Think about this a little bit, and you can either tell me next time or email me,” or what have you. I generally prefer to have it as an interaction that's dynamic.
Dr. Lisa Britton (32:04):
Yeah. I hope I'm not putting words in your mouth in terms of what I recall from the book itself, but I feel like one of the things you guys also talk about, if I remember correctly, is taking culture into account. Some people are going to feel very awkward providing feedback to a supervisor because of their history, and we need to be thoughtful about that as well. I think there are some ways we can couch this in that intro of the relationship at the beginning, having that conversation about, “Well, how do you like to receive feedback? How do you like to deliver feedback? Do you want to deliver that through an email? Do you want to deliver that through a survey? Should we have a conversation at the end of each meeting?” Really not having a “this is how I get my information” approach, but tailoring it to the needs of the trainee.
Dr. Tyra Sellers (33:09):
Absolutely, and I love that you brought it back to being culturally responsive. I suppose my perspective is that it can be easy to say, “It's gonna be difficult for this person to share feedback with me, so I'm gonna go the route of providing anonymity,” versus, “I've got that in my back pocket if I need it. Let's see if I can build a relationship where I show them they can trust me.” I want their feedback, it's valuable to me, it's gonna make me a better person, and I'm gonna implement it. In many of my talks about supervision, that's what people want to know: how do I get anonymous feedback? Well, here's how you can do it, but first think about whether you want to go there first, or whether there are some other things that you could do. I will say people increasingly have asked for strategies: “How do I start getting my trainees and supervisees to feel comfortable giving me feedback about my practices?” Maybe we can chat a little bit more about that. For example, one of the things I try to do is capture any natural feedback opportunity that happens, and I provide people with a lot of them. As Linda can attest, I am a terrible typist, probably to the point of some people getting frustrated watching me type. I'm sharing my screen and I'm typing something, and a supervisee or a trainee says, “Oh, you typo'd that word.” To me, that's the start of, “You gave me feedback as your supervisor. Thank you for keeping me from looking less professional than I would like to. Thank you for putting yourself on the line to tell me that.” I try to capture the tiniest instances and really tact them as, “Thank you for that feedback,” even if it literally was just, “You typo'd that thing.” How about you two?
Dr. Lisa Britton (35:24):
I like that, and it almost feels like a shaping procedure where you're starting small, at that person's baseline and reinforcing those successive approximations so that you can hopefully get the feedback that is truly going to be meaningful in terms of changing your practices.
Dr. Linda LeBlanc (35:46):
I think probably my primary strategy is, certainly if I make an error, I'll give myself some feedback. I want people to see that it's okay that I can notice my own mistakes and just talk about it. I don't mean casually, but I mean without a whole lot of self-consciousness. I think the other thing I do is try to use open-ended questions whenever I can, where any answer you give, as long as you think about something, is useful and helpful and meaningful and will move us a little farther down the line. I've already said a few: “What is something that I could do differently? What's one thing you hope I keep doing?” Those kinds of things. But I think people get a little more nervous about evaluating their effects when they think it might not be going as well as it could, to be honest with you. We're all happy to get that data when we're crushing it. [Laughing] When something feels a little nervous or tenuous, whether it's in the relationship or just that I'm not seeing the change that I could, I sometimes try to find a canary-in-the-coal-mine repertoire: if this isn't changing, I'll bet nothing else is changing either. One of the things I like to do is ask a certain kind of question each session: “What did you notice or think to yourself about what you were doing while you were doing it?” Very often the first responses to that are, “What?” [Laughing] “I didn't think about anything while I was doing it. I was just doing it.” Then, of course, I talk them through it: “Well, as you're doing that next time, try to notice: Does this feel easy? Does this feel hard?
Does this feel…” whatever it might be, or, “Am I noticing something about this client's behavior?” Even in supervision, you could notice, “Oh, that's something I hadn't thought of before.” If over time they don't start to have better answers to that question, I know I'm not yet doing something that is producing some tacting of experiences while they're having them, which is what you need for insight and to be able to start learning by yourself. What I'm trying to produce is a little bit of that self-noticing repertoire. If I'm not getting any of it, I'd start to feel a little nervous that my supervision's not moving the needle on that, and it might not be moving the needle on some other things as well. I'm pretty sure that if I keep at it, I can teach 'em to do discrete trials or teach them to write a purpose statement for a program, or what have you, and get the data collection accurate, but it's those kinds of repertoires where I almost feel like I need some repeated data points, where I can ask the same kind of question. I should, over time, be able to see a more robust answer, because it shouldn't be unexpected. If you're asking that just about every supervision session, or after you observe them, they ought to be expecting it. [Laughing] If they still can't provide a good answer, it suggests you're not moving the needle with your strategy.
Dr. Tyra Sellers (40:17):
Yeah. I love that.
Dr. Lisa Britton (40:20):
I agree. I like that reference to being able to tact what they're experiencing. I think that's a critical feature.
Dr. Tyra Sellers (40:28):
I think one other source of information for me has been to just talk things out with a trusted colleague or a mentor. I think it's difficult to have someone actually observe your supervisory practices. There are rare situations where I'm supervising Lisa and Linda is supervising me, and I can say, “Linda, come watch my supervision with Lisa.” If you have that, that's amazing, and use it. I have rarely had that opportunity. Often what I will do is just describe things to a colleague or a mentor. They frequently are able to point out areas of growth that I might not have been able to notice, because I might even be describing something that I think is going really well, and I might hear from them, “That's certainly one way to conceptualize that. On the other hand, have you ever thought about blah, blah, blah effect? Or maybe if you tried X, Y, Z.” I wonder if either of you have leveraged that in terms of trying to evaluate the effects of your supervisory practices.
Dr. Lisa Britton (41:50):
Yeah, certainly I've had conversations with other people. I haven't had the opportunity to do this, but I think another aspect that may be really helpful is being able to observe somebody else as a supervisor, to really get a feel for, “What could I steal from this person that is so awesome it may take me to the next level?” Something that I hadn't even thought of doing. I think that definitely leveraging other people who are doing similar work would be a great idea.
Dr. Linda LeBlanc (42:26):
I think there's a naturally occurring opportunity when I have the circumstances to be around another supervisor: when either someone is transitioning the supervision to me or I'm transitioning the supervision to them. I'll often say, “Let's do at least one meeting together and we're gonna introduce each other,” or what have you, but then I try to be mindful to have this mix of, “Let me show you a little bit of how I might ask them questions,” or guide them through talking about their cases, and then turn it over a little bit. I get to watch how this other person asks different questions, or maybe uses their facial expressions to differentially reinforce behavior in a different way than I would. That doesn't happen all the time, but boy, when it does, and you've got an incoming and an outgoing supervisor, no matter which end you're on, it's a great opportunity to see how people can respond slightly differently to the same circumstances. It's also interesting to see how the supervisee responds a little differently. Are they brighter in that joint session? Are they a little more reluctant, but they warm up over time? Any change, whether you intended it or not, whether you're contriving it or capturing it as it's going, seeing how it changes the supervisee's behavior can give you something to think about, to self-reflect on. I wonder why that was different? Why did I notice so much that this person was asking great questions? Maybe because I'm not asking some of those questions, and I'm gonna steal 'em. [Laughing]
Dr. Tyra Sellers (44:38):
Yeah, I think those are great strategies. I love that idea of even just exposing your trainee or supervisee to other sources of supervision or what have you, just observing how they react. We are getting close to the end. I wonder maybe Lisa, can you talk a little bit about strategies for using all of this data? When you have trainees, you've got folks that you are running through and you're helping them with their supervised field work experience hours. What are you doing with the data that you capture? Whether or not it's their specific data, client related data, your own reflection on your interactions with them. What do you do with it?
Dr. Lisa Britton (45:23):
I make behavior change. [Laughing]
Dr. Tyra Sellers (45:27):
Dr. Linda LeBlanc (45:27):
[Laughing] Yeah, you do!
Dr. Tyra Sellers (45:28):
Mic drop! The end. [Laughing]
Dr. Lisa Britton (45:32):
[Laughing] Yeah. Let's say, for example, I do a group supervision session and I deliver a training on a particular topic. I go through my didactic training; I show my videos and things like that. I keep that PowerPoint open, and then I go back and make changes in that PowerPoint later on, based on how that flowed, what went well, and what I want to do differently. Which scenarios really seemed to confuse people as opposed to enlighten them? Things like that. I think that's one strategy that I take with that ongoing improvement. I know that one of the articles you wrote together with Amber Valentino in 2016 was related to that supervisory relationship and the importance of, when you get feedback, making the change immediately. It's just so important. There's nothing worse than putting that behavior on extinction.
Dr. Linda LeBlanc (46:44):
Exactly, don't expect to get it again. It doesn't take very many trials of not making the change before that feedback will be gone.
Dr. Lisa Britton (46:53):
Dr. Tyra Sellers (46:55):
Yeah. I even think it helps to take a moment to tact it explicitly to the feedback giver. Even if they didn't give you the feedback directly, but you picked up on something, you can say back, “Hey Lisa, remember two weeks ago, when you gave me feedback about X, Y, Z? This is what I've done. Is it fixing the issue? Is this better for you?” Assuming that the change is good, maybe I've tried it out in other supervisory relationships, and I can even report back, “This has really helped me streamline processes or get better outcomes with my other trainees, and I just want to thank you.” That idea comes up when I think about doing the Performance Diagnostic Checklist–Human Services, which is something I love. I love that idea of considering, “Does the individual ever see the outcome?” When you think about feedback, there are so many times that we make the change, which is awesome, but what if that person can't expressly tact, or doesn't know, the degree to which that change has made your life better or other people's lives better? Why don't we just say it? I've tried that out over the last few years and it's been very effective for me, and it also provides that opportunity to engage in mutual, positive regard with your supervisees and trainees.
Dr. Linda LeBlanc (48:19):
It gives you the opportunity to model being appreciative of feedback and using data. We certainly want to model that every day, if we can.
Dr. Tyra Sellers (48:31):
Dr. Linda LeBlanc (48:32):
One last thing I would love for us to talk about is the notion, particularly if it's a trainee, that you learn to be a supervisor very often from your own supervision, and that notion of being really transparent about the fact that you are evaluating the effects of your supervision and making changes accordingly. That transparency hopefully is gonna influence their future supervisory practices too. What do you think about that?
Dr. Lisa Britton (49:10):
Absolutely. Whether we realize it or not, we are modeling supervision and those supervision practices all the time. We want to be thoughtful about what it is that we're communicating and whether or not that is what we want to continue within the field.
Dr. Tyra Sellers (49:31):
Dr. Linda LeBlanc (49:31):
No super sneaky data, transparent data. [Laughing]
Dr. Tyra Sellers (49:35):
Yeah. Not super sneaky. I'm 100% a fan of tacting what you're doing and why you're doing it for your trainees, because I used to be terrible at doing that. So bad! I really had to develop that skill. I'm not a good self-observer. I'm not really good at tacting what was effective and efficient for me and what's not. That's not so great for me, but it's even worse for anybody I'm trying to train, because how can I describe things for them then? I agree 100% that it's a critical piece in everything we do, but especially in evaluating the effects of our supervisory practices. I think you get this other effect, which is communicating that I value you as a trainee, because I care that your time is valuable and that I produce the best outcomes I can with you. The way I do that is by making sure that the things I'm doing are effective, and not only effective, but also acceptable. There's a social validity piece. I'm moving the needle on your acquisition, but it also feels good to you. It's that you like it. And that's what I have to say about that.
Dr. Lisa Britton (50:55):
I think you guys do a really nice job of talking about that in terms of the importance of that bidirectional, collaborative relationship. When you establish that expectation from the beginning, it just sets the occasion for those conversations to go both ways.
Dr. Tyra Sellers (51:15):
Yes. [Laughing] Absolutely and I think that is such a great place to wrap up this conversation. The idea that having a bidirectional relationship, where you both have skin in the game, you both value each other, you're both watching out for the positive outcomes really is the crux of why you would evaluate the effects of your supervision.
Dr. Linda LeBlanc (51:47):
Absolutely. It's one of the most important things we do, one of the biggest impacts we have on the field, and it's worthwhile to do it well. Don't assume you're doing it well until you have some data to support that, and really let it be your area of continued evolution and growth. Wonderful. Lisa, we are gonna wrap up, but I will just say one thing: In your consulting practice, you do a lot of remote supervision, where your clients are supervisees, trainees, what have you. Do you offer group supervision sessions for people if you're not doing their individual supervision? I think there are lots of places where people are getting individual supervision, but they don't really have an opportunity and a structure to get some group supervision. There can be some really important and valuable components of group supervision. Not only can you do some different things because you have peers, but it's also a different supervisor that they can see. Is that something that you are offering with your consulting service?
Dr. Lisa Britton (53:06):
I do have group supervision. I don't currently have anyone who's solely doing group supervision with me. One of the challenges is that I would have to link up with their organization so that, if they're counting those group hours as supervision, they're meeting that 50/50 requirement in terms of the amount of time that they're in group versus individual supervision. There are things to iron out in order to make that happen. I have worked with some people in the past who simply wanted to be a part of that group and didn't even count it toward their supervision hours, because they found it to be of value.
Dr. Linda LeBlanc (53:51):
That's fantastic. That's really what I was thinking of. Once you have that experience of learning in a group setting and seeing other people interacting with the same material as you, that's valuable, and it might be an appealing option to people out there. Thanks for all that you're doing to make the world a better place for the people you supervise, and also for contributing to the literature on supervision, because it's starting to grow. We're starting to get there. There was a time when we didn't have much. We're very grateful to you for having written your book and doing all the good stuff that you're doing, and most importantly, for joining us today on this episode of The Lift.
Dr. Lisa Britton (54:39):
Well, thank you. I was so happy to be here with you today.
Dr. Tyra Sellers (54:45):
Thank you so much, Lisa. This was a great conversation, and we will be back with another episode. Bye everyone.