Episode 45

Published on:

1st Jul 2025

Vital Signs and LMS Analytics: Is Your Course in Stable Condition?

In this episode, Camie and Alex discuss how to take the pulse of your asynchronous course using LMS data, student feedback, and your own professional judgment through reflection. From ghost town discussion boards to unclear instructions masquerading as rigor, they offer practical tips (and a few laughs) to help you diagnose issues early - before your course needs a crash cart.

Transcript
::

ID stands for Instructional Doctor.

::

In today's episode.

::

Alex and Camie are taking a curious, honest look.

::

At course evaluation.

::

We'll talk about using student feedback, grade patterns, and your own insights, not just to measure your course, but to understand it, improve it, and celebrate what's already working.

::

This isn't about perfection, it's about perspective. Let's dig in. So when you see a really controversial video on social media, what is the first thing that you do?

::

Like, comment and subscribe follow for more.

::

That might be the first thing you hear.

::

No, I can't. I follow every controversy dance that I find online. You gotta really confuse the algorithm. My first.

::

Like first inclination, after sharing it of course my.

::

Right, obviously.

::

Favorite actually is like.

::

To share it, but after that.

::

Is go to the comments section because and if I'm lucky, I'll go to the.

::

MHM.

::

Comment section before I.

::

Share it because then you can, you know, tell other people to go there.

::

Sometimes there's more gold in the comment section than there is in the actual content at the beginning.

::

I always find that just to be an interesting.

::

Support there.

::

It can be validating too, because oh, I was already thinking that. Oh, and I'm not alone. OK, I'm not crazy. Everyone sees what I'm seeing. Yeah.

::

No, there are 35,000 other people.

::

Were also thinking.

::

There you go.

::

Or you can get a different.

::

Perspective.

::

Which can be helpful in other ways if you're.

::

Open to that. This.

::

Comment on the Internet just changed my mind, said no one ever.

::

Doesn't have to change your mind, but it can go. Ohh I didn't.

::

I know.

::

Think of it like that.

::

There. That's fair.

::

And evaluating your course or.

::

Looking at your course evaluations.

::

That can kind of feel like reading the comment section on a video.

::

And sometimes it's just as mean as the comment section.

::

Sometimes more so.

::

Right.

::

And, you know, it could be confusing, because students, when they leave feedback, may not.

::

Always be direct, so it could be vague and you might not understand what they're talking about.

::

It could be unhelpful and even a little soul crushing because.

::

Yes.

::

It's personal. Your course is very personal, but you could also get something super helpful and think, hey, I was thinking that too, but it's something, you know, in the midst of grading in that really busy week. I didn't think to change.

::

Yeah, it's always important to have a growth mindset when it comes to reading these kinds of comments, not a fixed mindset.

::

Going in and you don't have to always take everything that's said.

::

Yeah.

::

We're not neutral when it comes to evaluations because.

::

Right.

::

This is like.

::

All the work that you've poured into it. It's a personal experience.

::

At the same time, though, and I think this is slightly off the main gist of what we're talking about, but ideas are also not.

::

Our identities, yes.

::

Yeah.

::

So it's also important to not be so invested that yes, the.

::

Simplest feedback or even some.

::

Rather mildly antagonistic feedback doesn't send us spiraling.

::

Right, because we're never going to please everyone in course development, nor should we. Everyone's experience is different. Course development is as much.

::

MHM.

::

Of an art as it is a science.

::

And so with that in mind, some people just aren't gonna, yeah, some people just aren't going to like what you develop.

::

And it's ever changing, right?

::

But that doesn't mean what you've developed is bad. We want to look at data points. We want to look at feedback we want.

::

To look at.

::

Actual metrics that we can use to inform us and get a holistic picture when we talk about course evaluation.

::

I think when we're talking about course evaluation, it's really important to get that bird's eye view instead of zooming in on those handful of, you know.

::

Antagonistic comments from students on their course evaluation.

::

We have several other tools that we can use to kind of zoom our lens out a little bit and look at the bigger picture of what's happening in our course. But remember, when you're doing it, it's not about judgment like.

::

Even when you're looking at data points, this is not about the judgment of you or even your work. This is about curiosity and improvement, and figuring out what's working and what's not.

::

Thank.

::

You know shift, how do you respond to what's going on in that moment? We also know that.

::

Each grouping of students is a little bit different, right? They have different student needs.

::

And as we have.

::

No.

::

Different generations come in.

::

That can make a big difference too.

::

Yeah, that's.

::

Maybe what I was hitting on with the idea of it being as much of an.

::

Art as it.

::

Is a science. You can deploy the most evidence-based pedagogical implementations in your classes, yet there can be a million factors going on outside of a classroom that impact how students individually, and then even collectively, perceive the course.

::

Experience, or even perceive you.

::

And again, a lot of those things are going to be largely out of our control. So what can we control? And that's where evaluating feedback and data points comes in.

::

Thinking of it as a constant experimentation process, staying curious, right, almost like a scientist. No input that you receive is necessarily.

::

Condemnation. It's just new information to kind.

::

Of retool.

::

Respond.

::

And respond to and then.

::

To.

::

Yeah.

::

Develop and grow from there.

::

It's that continuous improvement.

::

Yeah, growth mindset.

::

And we also, you know it supports student success and sometimes you have to do this anyway because of accreditation or institutional review and that type of thing.

::

There are a lot of reasons why we're course evaluating and so one way.

::

Yep.

::

That we sometimes evaluate our courses is by looking at.

::

The grading metric.

::

Our student scores overall.

::

And sometimes, a lot of times, people even zoom in on the D's, F's, and W's, the withdrawals.

::

Yeah, not Dallas, Fort Worth.

::

Right. No, no, we're not talking airports here. These DFW's.

::

In individual courses, you can track them over time, so you can look at it just for your.

::

One semester, how did that go?

::

You can look at them over if you've taught this course for several years in a row.

::

Has that percentage changed over time?
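As a quick sketch of what that tracking can look like (the course data below is entirely made up for illustration, not from any real course), the DFW rate is just the share of D's, F's, and withdrawals per term:

```python
# Hypothetical semester records: final letter grades, with "W" for withdrawals.
semesters = {
    "Fall 2023":   ["A", "B", "C", "D", "F", "W", "B", "C", "A", "C"],
    "Spring 2024": ["A", "B", "B", "C", "W", "W", "F", "C", "A", "D"],
}

def dfw_rate(grades):
    """Share of students who finished with a D or F, or withdrew."""
    flagged = sum(1 for g in grades if g in {"D", "F", "W"})
    return flagged / len(grades)

for term, grades in semesters.items():
    print(f"{term}: {dfw_rate(grades):.0%} DFW")
```

Watching that percentage move across semesters is the "symptom" described later in the episode: it flags that something changed without telling you why.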

::

Now.

::

There's a certain mindset that you have to be in for.

::

Looking at this type of metric right and that is.

::

Now, the problem with it is that the underlying assumption with looking at these DFW's, or any grading metric like that where you're looking at percentages, is that it's based on.

::

The Bell curve framework.

::

And the bell curve framework just says that you have, you know, the most C students, some B and D students, and the fewest A and F students, just like on a natural bell curve. There are different percentages that come up when people talk about it.

::

But I kind of think of it in terms of, you know.

::

A's and F's make up about 10% each, B's and D's are about 15% each, and C's are about 50%. And again, these are just approximations; it will look different if you actually research this concept.

::

Right.

::

But that's just the way that my brain.

::

Kind.

::

Of I can picture what 10% looks like right? I can picture what 15% looks like. It's easier for my brain to kind of chunk that up that way. When I'm thinking of this concept.
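To make that mental chunking concrete, here's a minimal sketch that uses only the rough 10/15/50 approximations mentioned above (ballpark figures from the conversation, not researched values) to compare a section's actual grade distribution against the bell-curve assumption:

```python
from collections import Counter

# Rough bell-curve shares from the discussion above (approximations only):
# A's and F's ~10% each, B's and D's ~15% each, C's ~50%.
BELL_CURVE = {"A": 0.10, "B": 0.15, "C": 0.50, "D": 0.15, "F": 0.10}

def grade_distribution(grades):
    """Each letter grade's share of the class, as a fraction of students."""
    counts = Counter(grades)
    return {letter: counts.get(letter, 0) / len(grades) for letter in "ABCDF"}

def vs_bell_curve(grades):
    """Observed share minus the bell-curve approximation, per letter grade."""
    observed = grade_distribution(grades)
    return {letter: observed[letter] - BELL_CURVE[letter] for letter in "ABCDF"}

# A hypothetical mastery-graded section: most students earn A's and B's,
# so it diverges sharply from the bell-curve assumption, and that's fine.
section = ["A"] * 12 + ["B"] * 8 + ["C"] * 3 + ["D"] * 1 + ["F"] * 1
```

Under mastery grading, a big positive gap on A's and a big negative gap on C's isn't a red flag; the comparison just makes explicit which framework you're implicitly judging the numbers against.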

::

Now, that's not to say that this doesn't.

::

Tell us anything though.

::

Right. Because I I think the question I was going to ask and I think I.

::

Know your answer.

::

But I want to hear is on an initial assumption of that, would you say that is a?

::

Accurate way to look at.

::

Grade distributions in courses in general? Do you think that most students should be receiving C's?

::

Based on performance, the fewest should be receiving A's.

::

And F's.

::

By design or by.

::

I actually do not follow this approach.

::

I am more geared toward the mastery approach and honestly, when I look at, you know, we.

::

Work with a lot of instructors.

::

And when I look at the student grades in these courses.

::

That's also not how they grade. No one that I know, that I'm working with currently, is using this bell curve approach. It's a little mismatched from what is happening and what we're looking at, right?

::

They're saying, oh, if I see that I have 50 students in the class and 30 of them got a C, OK, I'm doing a good job because the majority of my students are doing an average level of work.

::

Right.

::

That wouldn't be.

::

I wouldn't think so. Maybe there are some instructors that might think that way, but I don't think a good chunk of instructors would want to say, if they were bragging about their course, I set up this great course and so many students got a 72, it was great. Only one guy got an A, one gal got a 90%.

::

Most of my students got C's.

::

Again, you could have some folks like that, but.

::

But I don't know any. I have not seen any in the, however many years I've been here now, three years.

::

Right.

::

Well, it's it's the, it's the, it's.

::

The cinematic stereotype of an instructor, like, only three of you out of the 100 in this lecture will.

::

Pass.

::

You know it's, it's that.

::

Right, because we.

::

We grade based on mastery most of the time. Are you mastering this concept or the skill?

::

It's not, we're not looking at it and going, OK, how many of the students' work hovers around here, this average. No, we're saying, did you meet these expectations? We'll talk a little bit more about mastery and what that looks like in a little while, but that still doesn't mean that these D's, F's, and W's can't tell us anything.

::

They still are data that we want.

::

There's still, right? There's still data that you can choose.

::

To look at.

::

To look at.

::

And we don't know what it's telling us.

::

We.

::

Right. And so that's the thing. You know that you can see that something is happening but you don't know why.

::

Do you know if it's signifying how engaged students are? Is it showing that maybe there's a really large class size, especially for those Gen. Ed courses, and that might not be working out well for student success?

::

You know, is it showing that there are challenging topics that don't have enough foundational resources to support students bridging up to those concepts?

::

Right. And do the gradable items within your course actually scaffold in a way that allows students to reflect their learning in accurate ways or?

::

They.

::

Dealing with high-stakes, high-test-anxiety outcomes because 80% of the grade is built on a midterm and a final. Yeah, and that's it.

::

Right.

::

Or it could even be that you have one of those upper level courses that don't have a prerequisite, but.

::

Should.

::

It can look like a lot of different things, and so it's difficult unless you look at other things. You.

::

Can't just look at that.

::

Have you watched the show the pit?

::

It's like I've maybe seen an episode.

::

So it's Noah Wyle from ER; it's his new show where he's the chief.

::

Your.

::

I've just seen the commercials. I haven't.

::

Doctor.

::

Watched it yet I'm excited.

::

So it's great. The connection I'm making with this is, you know, it's a 24-hour, so the show is centered around one shift in the Pittsburgh Medical Center trauma unit, the ER there, and he's got a crop of interns and med students and his residents that he's working with, and just the.

::

Chaos that ensues, and just working.

::

And in one of the busiest.

::

ER's in the country. And so the thing that it is making the.

::

Connection with me here.

::

Is they have a patient come in, you know you've got two or three.

::

Medical professionals in the room talking to the person about what's going on.

::

And they have.

::

To essentially use all these data points. They're looking at symptoms, and the DFW's are the symptoms, right? Like, oh, they're coming in with fever, aches and pains, chills. That's not telling you; it could tell you a handful of things it could be, but you're going to have.

::

To do more digging.

::

They have to then do blood work or.

::

Run tests, run panels. They have to ask more questions. They do something, a differential diagnosis: what could be the other things this could be, outside of the norm? And again, that's something a medical expert could explain.

::

Yes.

::

Much more clearly than I'm informing right now.

::

But that's why I'm seeing the DFW.

::

Grade distribution as similar to the symptoms when a patient.

::

Comes into the ER.

::

It's telling you something is going on, something is wrong, right? But we need to investigate. We need to look at other components to figure out what could actually be the root cause of these symptoms.

::

You have to.

::

Yeah. And it's funny that you mentioned the differential diagnosis because that is for me one of my favorite tools in medicine.

::

Because you look at those symptoms and then you go, what are not just the probabilities, but the possibilities, and then you use evidence to exclude things. It's not just going, oh, I don't think it's that; it's, there's not evidence to support that it is this thing. And so I do like that a lot, because that's exactly what we do when.

::

Yeah.

::

Right.

::

The.

::

When we're looking at these and we're going OK, could it be this? Could it be that? How do we investigate that? I will.

::

We're basically doctors.

::

Of course. Doctors, every time. You know, sometimes it does feel that way.

::

Yeah.

::

We're saving lives here, folks.

::

So the DFW's, a lot of times those lower scores or the withdrawals are what we look at, because when people see that, they see.

::

On the opposite end of that are A's, B's, and C's. Most people don't look at these when they are going through course evaluation, but I believe that they should, because they do tell a story, just like those D's, F's, and W's.

::

And of course, some of that story could be that your course is set up really well and that students are succeeding in your course, right? They're mastering the things they should master.

::

Right. If, yeah, if you set it up with the desire for mastery, it could be that.

::

Right.

::

But.

::

If.

::

Like the D's, F's, and W's assumption, we focus on that bell curve, a disproportionate number of A's and B's would also be problematic, right? And also, you can kind of.

::

Look at your course and know. We already established that most instructors are not grading on the bell curve.

::

Most instructors are grading based on mastery, but also there are courses where it's participation. So is it showing mastery at that point, if, you know, we're talking more about a participatory grade?

::

Or things like that. So even with that, we don't know exactly what it's saying. Even though, you know, the assumption would be that things are going well, that's not necessarily true.

::

We can establish.

::

That the DFW's and the bell curve is a possible way to interpret it.

::

Maybe not the best, but at the same time the grade scale.

::

And grade reports can be data to tell us that something's going on. But we've alluded to mastery; we've talked a little bit.

::

About it, but.

::

We both would probably agree that mastery teaching for mastery, evaluating for mastery is.

::

MHM.

::

A more comprehensive or reflective way to look at course outcomes.

::

Evaluation. So let's just say more about that.

::

Right, evaluation, because that's what we're already using. This is going to be a little bit more specific. So just for those who aren't used to this concept, mastery grading focuses on the mastery individual students achieve on specific tasks or knowledge in your course.

::

Right.

::

They are largely using objectives and specific aligned assessments to evaluate the level of mastery per objective, and I would be remiss.

::

If I did not mention this next thing which we mentioned in every episode, and that is rubrics.

::

Got to mention rubrics because.

::

Often rubrics are used in this form of evaluation, if you're directly aligning those assessments and objectives.

::

That's not always true, though; some people do without that. But that's one of the forms.

::

That people use.

::

This one does give us a little more specificity, because, you know, we're leaning in to where students are struggling. You can see where the holes are; it's not just that overall grade. But it still lacks the ability to give us a cause.

::

Yeah.

::

You still don't know why students are struggling in those spaces.

::

No, but we can though.

::

Especially with rubrics, we can double-click down deeper into evaluation of individual components. This is where, if you've ever worked with a staff member in ID at Global Campus, we hammer home backward design. We hammer home alignment of objectives, we hammer home Bloom's taxonomy, those kinds of things, measurable verbs, because we want to be able to.

::

Like you said earlier, when the doctors are doing the differential diagnosis, not just say I feel like this might be the case, you know, they have to look at evidence. If we anchor it to.

::

Backward designed.

::

The outcomes are linked to measurable assessments, those measurable assessments are linked to measurable activities with measurable objectives, we can investigate those sticking points more easily, right? Because it's easier to say, if I want a student to be able to identify the differences between these different.

::

It's easier to investigate, yeah.

::

Classes or species of butterfly?

::

When they struggle in that particular assignment that is evaluating that, I can see how they missed comparison and contrast, with the way I set up the assignment, better than if I just vaguely said I.

::

Want them to learn about this right?

::

So again, that's why.

::

It starts, I mean, I know we're talking about evaluation, but it really starts at the beginning. With design and development from the onset, we'll get a better evaluative outcome.

::

When we set up the course.

::

When we plan it and then design it and then execute.

::

My brother-in-law puts it this way. He works with CEO's and coaches them, but he basically says, every system is designed to get the outcome it produces.

::

Yes, and and I am.

::

I'm a system thinker, and so I also kind of.

::

Lean into that a little. Of course, there are always anomalies; there are always, you know, outlier students. So we're not saying if one student is having a hard time, but when you notice patterns in data, you know, if 80% of your students are not understanding this concept.

::

Then there's something in the design that we can look at.

::

Right.

::

And so looking at grading in general.

::

Things that we can do to kind of.

::

Boost that, understand the data a little better, is always to ask ourselves, whether we're looking at those DFW's or we have specific information from our mastery-based.

::

Grading, here it is: did students have any learning gaps that might have hindered?

::

Progress? And just to make a little plug here, we do have an episode on learning gaps, so.

::

I like it.

::

Subscribe for more.

::

Go back and take a listen to that if you're unfamiliar with learning gaps. Learning gaps aren't always about knowledge; they can also be about resources, environment, motivation, and skills. So.

::

Yep.

::

There are ways to assess for those, and sometimes that that is the problem.

::

You can also look at yourself as the instructor, right. Did you hold students accountable to mastering those materials and skills?

::

What would that what would?

::

That look like in an example.

::

Just to make it more concrete, we put.

::

You.

::

On the spot.

::

Rubrics. I mean, it can. So I think, for me, when I have been in an instructional position, then it is about saying, OK, this is what.

::

It all comes back to rubrics.

::

Here.

::

Our goal is for this week.

::

This is what you need to learn and then making sure that they're producing evidence that they have learned that thing.

::

On my end as the instructor, it's my responsibility to.

::

Resource them well so they can learn that, and facilitate that learning that they're doing, and then I have an assignment that proves it, right? And they know how to get there.

::

So.

::

They know what they should be doing.

::

And then they do the thing, and then I, as the instructor, go through and give them feedback. Whether they've done really well, I don't just go, ohh, excellent job, and move on. Right? I say.

::

Hey, you did an excellent job here, here and here. Here's how you can extend this next time.

::

Because mastery should still be seen as an evolutionary concept.

::

Always. Yeah. There's not, like a.

::

I know people want to say, oh, that's an A plus. That's.

::

As good as.

::

They get. No, you can always get better.

::

And because that kind of leans in more to that growth mindset. Now, that doesn't mean that you're not making an A, right?

::

Right. Because I could see somebody maybe in like the mathematics field say, well, I've put these 10 problems in and they answered all 10 correctly, right. But what we could do is if you have them show their work.

::

M.

::

Are you seeing any areas where maybe they arrive at the right conclusion, but?

::

Are there areas you can improve, the flow with which they work through an equation to get to?

::

The final outcome.

::

Right. Or show them a different approach to do so, or even, what is the next step after they solve this equation? What could they do with it? How do they put it into context? Or maybe even, like, visualize the math process? So yeah, there are lots of ways to kind of.

::

Right.

::

Yeah.

::

Hold students accountable.

::

In your course.

::

But a lot of it has to do with you providing feedback and them giving you feedback on that feedback, right? That may look like an opportunity for them to increase their score, but it could also just look like an opportunity of, ohh hey, what are some resources I could look over for this feedback that you gave me? Holding your students accountable.

::

And.

::

Can make a huge difference in the course, but it does take kind of that relationship, I think, that you have with them.

::

Right. And also, having the relationship.

::

Changes then how you view the evaluative feedback, whether it's on the positive side or even on.

::

The slightly or?

::

Significantly negative side. If you've built the rapport with the students, then typically when you do get feedback, most of the time it's anonymized, but you at least.

::

Hopefully it's coming from the place of mutual respect and understanding that we want to grow and we want to improve.

::

And if we're talking, I'm talking maybe specifically about, like an end of course evaluation.

::

But I think this is where instructors can be more open to iterative feedback in a more regular cadence built into the course. Like, we're even developing courses this summer, some new ones to be launched in the fall and next spring, and I was reviewing our planning document with an instructor.

::

Yes.

::

And the instructor put a mid-semester evaluation of the course in the midst of the planning document, and I wrote on there, I commented, this is a great practice, like, keep this kind of getting feedback, because ideally you'd.

::

Do it more often, but that's, no, go ahead.

::

We'll get into this a.

::

Little later too, on student feedback. Sure, sure.

::

No, and I will. I will say in terms of more fixes for grading.

::

There's a tool in Ultra where you can analyze your quiz or test questions.

::

You can also use AI or your own brain to analyze.

::

Questions or assignment prompts.

::

See if they are.

::

Written in a confusing way and also make sure that you have resources available.

::

That.

::

Students know where they can learn those things or perform those things, even if it's a skill or, like, a certain format you're asking them to use.

::

We talked about these things in several.

::

Episodes. So this is not a new topic.

::

But doing that gives you better data for then going oh, wait a minute, are there certain skills that the majority of students are missing?

::

In some way? And is the fix for that adding resources to my course, or is the fix for that requiring a prerequisite for this course?

::

And that's a conversation, obviously, to have with your department, so.

::

But just taking a look at that.

::

Is really helpful.

::

One other thing, I'm just going to mention right here is that.

::

If we can.

::

Also kind of listen to students. In terms of, I had a course recently where it was number two of a two-part sequence, like a part 1 and a part 2.

::

Yes.

::

And in the first course they had it set up a certain way. Well, they were getting a lot of student feedback that the second course.

::

Wasn't really giving anything new to the students.

::

So kind of also keeping that in mind as you go. It's not just, is this too difficult, but also, are you at the appropriate level? And.

::

Yeah, desirable difficulty is a real.

::

Yeah. And that for me is also part of like holding students accountable, right?

::

Yeah.

::

But as Alex mentioned, student feedback is one of the other things. A lot of times we look at that year-end thing; we do end-of-course evaluations, and I even know some instructors who will avoid these because.

::

They've either gotten burned in the past from looking at them.

::

And sometimes.

::

Unjustifiably so, you know. Sometimes they just get some mean comments in there, not really related to anything in the course that could be improved, or on them that could be improved, just because you have a disgruntled student sometimes.

::

Yeah, there's a million factors. It's the end of the semester. Students are tired. They want to move on.

::

Right.

::

Yes, but that's also kind of the crux of the problem, right? You have students at the end of the semester, well.

::

Yeah.

::

How are they making comments about the first of the semester? Because they are tired.

::

They have forgotten. You know, if this is a regular term, that's 16 weeks ago. So that's a little.

::

Yeah.

::

It's a little.

::

Far fetched to ask them to remember details.

::

Yeah. And another way to look at this is, you could see that as one way to do it, and it's not that those end-of-year evaluations are without merit or value, but they can be, again, one data point in a collection of data points. But this is where you start to see, in certain.

::

Project or process management development modes.

::

Something like, I mean, on one hand we have an evaluation model that is set up to only evaluate once something's created at the very end and then deployed. We have other models where you essentially create beta versions of something, test it in active.

::

Right.

::

Ongoing scenarios, and then take that information and go back to the drawing board, retool, rethink, and continue to deliver till you get to that gold standard. And I like those successive-approximation models more, when possible, because, hitting on what you're saying there, students are giving feedback when the experience is fresh.

::

So in a.

::

Classroom situation, whether online or face-to-face, this is where, at the end of the unit or the end of the lesson, you provide them a link or a form that they can just quickly fill out. Hey, what made sense this week? What was tough? What sticking points were there? Was there material that was exceptionally challenging, and why?

::

Or, you know, asking those kind of formative questions that help you understand how they're feeling as.

::

They work through.

::

The.

::

Course, but then also reveal some of the content matter that might actually need more instruction and more assessment to help them work through it more clearly.

::

Yeah, and.

::

There's actually research out there that says students actually.

::

Are better equipped to evaluate their personal experiences in terms of like content, difficulty, or engagement than they are at evaluating your instruction.

::

Oh yeah.

::

As an instructor, right. And so when we do get student feedback then.

::

And their experience, like that's valuable information but it.

::

Ohh yeah, how they feel in the instruction is way better than their qualification to talk about your ability to teach, right? That's not what we should be.

::

Right.

::

Evaluating, right. And so when we're talking about this, we look at, OK, well, how do I make this more engaging? It's not about you; it's about the content. And I think, in those terms.

::

Iterative evaluation after the lesson. At the end of the unit, something even.

::

More regular than mid semester right? Can be really helpful.

::

Sure.

::

What I mean by that is like even.

::

Doing something, yes.

::

Even if you feel like this is a lot to add in, even if you can only start off with a midpoint-of-the-semester check-in, yeah, that is going to be better than waiting till the very end of the semester. But I agree, the more regularly you can build that into a cadence, if you do it from the onset, it becomes a norm.

::

Kids are used to it. They just see it as part of the.

::

Part of the participation in the class.

::

Yeah. And there are so.

::

Surveys, when you build them into the course, also don't have to be separate. You can just put one survey question at the end of your lesson quiz, right? That doesn't have to have points attached to it, because in Ultra you set points per question, not per quiz.

::

Yeah.

::

Right.

::

So you can do that. You can even make that a bonus point question if you are so inclined to do so.

::

That can be really helpful because again, it just gets to be.

::

Part of your.

::

Course culture to give that feedback. This can also come as a course-cafe-style continuous discussion board. If students are having problems or things like that, your inbox becomes flooded with discussion board notifications and you go, oh hey, something's happening. So it's a little bit easier.

::

To kind of have an open dialogue when it's when it's a norm.

::

Yeah, that's a good, like, asynchronous online course standard for sure. In the face-to-face, it's.

::

Opening that space up in class at regular intervals throughout the semester.

::

Asking in those spaces, or sending out emails, or having that built in. Even if you're teaching in a face-to-face,

::

You still usually have access to an LMS where you can put that in there, and it's just fine to have it right there.

::

Yes.

::

It is. So I will say, some other options for evaluating your course other than grades and student feedback are to look at outside evaluation frameworks. We use Quality Matters for all of our online asynchronous courses.

::

That also applies to in-person courses if you are using Blackboard as a content option.

::

Because it focuses on alignment and accessibility, you can even use the alignment portion for your in-person courses, even if you're not posting on Blackboard,

::

And saying hey.

::

I actually have objectives and I'm assessing them with these assessments. That's what I'm asking students to do.

::

Yeah, yeah.

::

So finding an outside framework that works for you and your context is really important.

::

There's also self-evaluation, just like with the students, in terms of self-evaluation or evaluation.

::

If you are self-evaluating, do that consistently. Weekly, or by lesson or unit.

::

Monthly. You know, whatever works for you, where

::

You can remember the most about it.

::

What did students ask about the most?

::

Right.

::

Where did you spend the most time troubleshooting? What surprised you the most in your grading?

::

Are your assessments evaluating the knowledge and skills they should be and what have you done to ensure students walk away from this course with skills they can take to the workplace?

::

Yeah.

::

So you can, of course, add questions to that, but think through what it looks like.

::

And one of the implementations I've seen instructors do really well with, in some of the courses that I helped develop, is

::

Obviously, if you find a system that works best for you, that's great. But one thing is to put a hidden document at the bottom of each weekly lesson and just have it say instructor notes for the week, and they write out some of those questions, or they write out what were the main points that got brought up this week, or what were the main messages that I got, or how did this

::

MHM.

::

Lesson seem to function. And they keep that both for their own

::

Reflection, but then also as information for if that course is ever taught by other instructors in the future. Hey, this is a historical annotation of things to keep in mind when you're teaching the course. If you encounter some of the same problems, then you can understand, OK, that's something that I was warned about and maybe that instructor was able to get ahead of

::

Them, but then also.

::

So again, it benefits you in your own teaching and evaluation, but it could also be beneficial, when deployed in some capacities, to help other instructors too. So it can be a win-win in that regard as well. And obviously it hopefully helps the students as you then reflect on

::

You.

::

Your own instructor self-eval to improve the experience of the course.

::

Yeah, it's.

::

It's an easy way to also prompt yourself to remember later, if you don't have time to make those changes, or to make the changes in the dev shell if you're working in a dev shell. Looking back on that, in addition to kind of being that

::

Living history of the course and, you know, things that have gone well and whatnot, helps prompt you to make changes prior to the next semester moving forward. You can

::

Also do that as.

::

A survey for yourself, and you would just get the results in

::

A different format.

::

But that's harder to share with other instructors, right?

::

Yeah, I guess that's fine if there's a workflow that works best for you. That's just something that I've seen. And what's nice about that, especially in our instance here at our university, is these courses that we help design

::

You know, are in part developed and owned in a sense by the instructor, but then also ultimately belong to the university, so that we can continue to develop them and use them for other students and instructors down the road. And so having that shared repository space of knowledge is really valuable. And as we continue to try and ever improve,

::

Like we already mentioned, mastery never truly arrives. We want to

::

Have that growth mindset.

::

I'm just going to hit all.

::

The highlights of the.

::

Talking points. Let's pretend that you do master it and become an expert, right? That doesn't mean your expertise cannot grow.

::

That's fair.

::

That's what my degree tells me right now, that I have a master's degree.

::

And I continue to learn. How about that?

::

So this week, try adding a one-question check-in at the end of a module or quiz: what's one thing that confused you this week? It could be the smallest change that leads to the biggest insight. Thanks for joining us on the Pedagogy Toolkit. Don't forget to subscribe.

Show artwork for The Pedagogy Toolkit

About the Podcast

The Pedagogy Toolkit
The Global Campus Pedagogy Toolkit is a podcast where we focus on equipping online instructors with the tools to foster student success through supportive online learning environments. We explore engaging online teaching strategies, how to design the online learning environment, supportive practices for online students, and how to stay current with higher education policies through discussions between guests and instructional designers.

About your hosts

Amalie Holland

Profile picture for Amalie Holland
I'm a recovered high school English teacher now working as an instructional designer at the University of Arkansas.

Alex Dowell

Profile picture for Alex Dowell
Hey there! I'm Alex and I love learning! I have undergrad and graduate degrees in education and have worked in and around higher education for over 8 years. Discovering how emerging and historical technologies blend to improve teaching and learning really fires me up.

When I'm not podcasting or planning courses, you'll find me outside on running trails, reading, drinking good coffee, watching Premier League football, and hanging out with my family.

Feel free to ask me anything!

James Martin

Profile picture for James Martin
I'm an instructional designer at the University of Arkansas Global Campus, where I work with professors to make online versions of academic classes. I've spent most of my career in higher education. I've also taught college and high school classes, face to face and online. I’m passionate about education, reading, making music, good software, and great coffee.

Camie Wood (she/her/hers)

Profile picture for Camie Wood (she/her/hers)
Hi! I'm Camie, an instructional designer with a passion for teaching and learning and I believe in the power of effective design and instruction to transform student learning. I have seen this transformation both in the classroom as a former teacher and as a researcher during my pursuit of a PhD in Curriculum and Instruction.

Outside of work, I enjoy spending time with family, being outdoors, and reading. I love a good cup of tea, embroidery, and gardening.