Continuing from a while ago: in May I joined an Eberly Center reading group on the educational approach known as Active Learning (AL). Again, AL essentially just means replacing “passive” student behavior (sitting quietly in traditional lectures) with almost anything more “active.”
I’ve already described the first week, where we discussed the meaning of AL and evidence for its effectiveness. In the latter two weeks, we explored how to implement a few specific AL styles.
My notes below go pretty far into the weeds, but here are some big-picture points:
- Spend more time on designing good questions & tasks (and perhaps less on your lecture notes).
- Ask students to put a stake in the ground (whether a carefully-prepared response or just a gut-instinct guess) before you lead a discussion, show a demo, or give a lecture.
- Teamwork (done well) has huge benefits, but make sure the assignments are designed to be done in teams (not stapled together from individuals’ separate work), and teach teamwork as an explicit skill.
[OK, so last time I joked we should teach a course called Active Active Learning Learning, where we use AL pedagogy to learn about the stats/ML experimental design concept also called Active Learning. But the reverse would be fun too: Run a course on Design of Experiments, where all the experiments are about evaluating the effects of different AL-pedagogy techniques. That is to say, a good course project for Intro Stats or Design of Experiments could be to evaluate the study designs below and improve or extend them.]
Notes-to-self, from weeks 2 and 3, below the break:
AL Activity Table
The seminar organizers put together an Active Learning Activity Table. Covering around 15 activity ideas, the table gives each one’s time required, a short description, and examples.
Regarding the first few 1-minute activities, our group noted that some students won’t do them unless they’re graded, while others want feedback even when they’re not graded. So some combination of “participation points” and feedback can motivate good responses here. At the very least, give group-level feedback at the start of the next class, to show students they’re not being ignored.
These also make good “lab activities” to collect a small deliverable from everyone instead of signing an attendance sheet. Perhaps grade them on some days, and on others just record participation.
For “Muddiest Point,” you can let them instead respond to: “What else do you want to know?” (As a student, if nothing was muddy to you that day, you’re encouraged to think about going beyond the basics.) Also, if students bring up tangential “muddy points” that you weren’t planning to cover, these might be useful topics for student projects later in the course.
A few activities we could add to this table:
- Writing exam questions. This is a great review session activity. Students will tend to work hard on this, since it’s in their interest to write a question you might actually use.
- Act One, in Dan Meyer’s Three Acts setup. Give them a clear hook, a scenario that invites a prediction or estimate. Ask students for their best guess, but also for sanity-check ranges (what answer would definitely be too low? too high?), which we can use after doing the calculations.
- In our group, people also tossed out activity names like “POE: Predict, Observe, Explain” and “Problem-Based Learning” which we didn’t get to explore in detail.
We spent the rest of this week covering some specific AL activities in more detail.
The pause procedure
As described in Ruhl et al. (1987), this is incredibly simple: just pause three times during your lecture, for two minutes each. There’s no prep required, other than perhaps a reminder in your lecture notes that this is a good moment for a break. It’s not a substantial cut from class time, even in a 45-minute lecture. In this study, “During the pause, subjects formed dyads and discussed lecture content (e.g., asked each other for clarification of concepts or caught up on notes).”
Despite its simplicity, the pause procedure produced substantial improvements in student outcomes in this study. It seems to give students a chance to organize, store, & assimilate information, both in notes and in long-term memory. They can also clarify points of confusion with a classmate, without fear of asking a “stupid question” in front of the whole class. I suspect it is also especially helpful for students whose first language is not English (or whatever the local language of instruction is). If native speakers need a break to digest the firehose of information from a lecture, that need is even more acute for non-native speakers.
However, our group wondered if this study from 1987 would change today, with smartphones and laptops and social media. Would the pause have no effect, because students would just use it to check Facebook? Or would it help, because they’re probably checking their phones throughout lecture anyway, and this might encourage them to wait until you stop talking? Or would it actively harm, because those who weren’t already checking phones would do it during the pause, and the context-switching will throw them off track from the lecture?
If nothing else, that question itself (how to evaluate the pause procedure in modern classrooms?) could make an interesting study-design discussion in Stat 101 courses. Get students to think about how they’d measure the effects, monitor attention during class, enforce technology bans if they want to evaluate them, etc.
A humanities example
Most AL studies seem to be in STEM courses. The seminar organizers chose Tinkle et al. (2013) as an example of AL in the humanities classroom. I have to say, I had a lot of concerns about this article’s study design and conclusions… But let’s not rant here, let’s focus on the good nuggets we can draw from the article.
As I read it: The authors found that students barely improved over the course of a semester when they got no feedback on (long-form writing) quizzes. However, the following year, instructors actually showed students the grading rubric, and performance improved much more substantially. In other words, it helps when students know what you’re asking of them.
Other AL tasks used: Students wrote reading-quiz questions, and each discussion section was responsible for composing an entire quiz on one of the readings. Also, during lectures, instructors paused regularly for small-group discussions (how to interpret a passage from the reading in light of this lecture, and how to justify that interpretation?). Finally, breaking the class into discussion sections allowed for discussion in larger groups.
And as we said last time, literature & many other humanities courses are already naturally taught via Active Learning. How else would you teach literature than by having students read in advance, then come to class and discuss?
Classroom demonstrations
I love the idea of classroom demonstrations, like beakers fizzing or things exploding in physics and chemistry class. Even in statistics, we can do some demos by running computer simulations live, rolling dice, etc. However, Crouch et al. (2004) found that there’s no substantive improvement from passively watching a demo, compared to never seeing that demo. On the other hand, if you merely ask students to predict the outcome before running the demo, that’s enough to produce learning gains. It helps even more if you ask students to discuss with a neighbor after seeing the demo, before hearing the instructor’s explanation (as opposed to seeing the demo and hearing it explained immediately).
So try to get students engaged, thinking critically about the subject, before you launch into the demo or explanation. If nothing else, once people have made a prediction or bet, they seem quite naturally motivated to watch carefully and see if they were right. If they were wrong, having made the explicit bet, they might also be more motivated to think about why it was wrong (so they can bet correctly next time).
It’s such a simple hack (and cheap too, adding only 2 minutes of class time to the demos in this study). But too few instructors think to do it.
One concern our group raised is that sometimes you simply can’t ask for a prediction before the demo. In a psychology course, the “demo” might actually be a mini-experiment on the students sitting there in the lecture. If you ask them to predict the outcome first, they’ll be aware they’re about to be in an experiment, and the outcome will change. I wonder if you could do something like K-fold cross-validation? Each lecture, a different subset of students will be informed about the experiment’s purpose in advance, so they can try to predict the result; their remaining classmates will be the experimental subjects during lecture. Each week, rotate the subsets so that everyone gets to predict sometimes and be a subject at other times.
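That rotation scheme could be sketched in code. Here’s a rough illustration of the idea (the roster, the fold count, and the scheduling details are all hypothetical, just to show the rotation):

```python
# Rough sketch of the K-fold-style rotation idea above (roster and fold
# count are hypothetical). Each week, one "fold" of students is told about
# the experiment in advance and predicts its outcome; everyone else serves
# as an unaware subject. Rotating folds means every student gets to predict
# sometimes and be a subject at other times.

def rotate_folds(students, k):
    """Return a k-week schedule; week w's predictors are fold w."""
    folds = [students[i::k] for i in range(k)]
    schedule = []
    for week in range(k):
        predictors = folds[week]
        subjects = [s for i, f in enumerate(folds) if i != week for s in f]
        schedule.append({"week": week + 1,
                         "predictors": predictors,
                         "subjects": subjects})
    return schedule

roster = ["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"]
schedule = rotate_folds(roster, k=3)
```

Over the k weeks, each student appears as a predictor exactly once and as a subject the other k−1 times, which is the balance we’d want.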
Inquiry-Based Learning (IBL)
We read Laursen et al. (2014), but this article was actually more about the outcomes, not the implementation of Inquiry-Based Learning (IBL). Here’s one site that explains this AL approach. I also like Dave Richeson’s description:
For the last 10+ years I’ve taught topology using a modified Moore method, also known as inquiry-based learning (IBL). The students are given the skeleton of a textbook; then they must prove all the theorems and solve all of the problems. They are forbidden from looking at outside sources. The class types up their work as they go. At the end of the semester they have a textbook that they wrote. It is a great way to learn, and at the end of the semester the students are thrilled to hold a bound copy of the textbook that they created.
A while back I asked about using this method for teaching statistics, and commenters had some good suggestions.
Back to our seminar: The Laursen article focused on the benefits & effects of IBL. The authors point out that IBL erases the gender gap seen in traditional courses, where female students have worse learning outcomes. But as they say, “IBL methods do not ‘fix’ women but fix an inequitable course.” It’s not true that “girls suck at math” as XKCD jokes, but the stereotype causes women to experience stereotype threat¹ in math class, and this seems more severe in traditional-lecture courses than in IBL. Furthermore, math is hard for everyone in traditional-lecture courses, compared to IBL, where collaboration and deep engagement are encouraged. In fact, male students’ performance also seems to improve in IBL courses.
The article authors “identify twin pillars that support student learning in IBL classes: deep engagement with meaningful mathematics and collaborative processing of mathematical ideas.” There’s an immediate link between the ideas discussed during class and the homework time spent to refine or build them. Collaboration and peer critique help students see that math is “not one way only” and understand that everyone struggles—success comes from effort, not innate talent alone.
In other words, IBL seems to prevent students from becoming disengaged and discouraged, redirecting their thinking from “I must suck at this, and what’s the point anyway?” to “Everyone struggles with this, and I see the value of that work.” This decouples the difficulty of the topic itself from difficulties caused by traditional-lecture course design, perhaps helping to alleviate stereotype threat this way.
Unlike small AL tweaks such as the pause procedure, adopting IBL is a large commitment. You have to revise the entire course from scratch (although you could start with IBL course notes published by the Journal of IBL in Mathematics). But the benefits seem worth the effort.
Team-Based Learning (TBL)
We read Michaelsen & Sweet (2008) on Team-Based Learning (TBL). This is not just general advice on teamwork, but a concrete, regimented structure for teaching entire courses. Like IBL above, it would take a major rewrite of your entire course materials to get started.
In TBL, you assign students to permanent small groups for the whole semester. For each major chunk of the course, you repeat the following 3-part structure:
- Preparatory reading / study, outside of class time.
- In-class “Readiness Assurance Process” (RAP), around 1 hour, which itself has four parts:
- Each student takes an individual test on the readings. Tests are multiple-choice, “focus on foundational concepts, not picky details, and [are] difficult enough to stimulate team discussion.”
- Before getting feedback, each team retakes the same test, and members must reach consensus on each answer. The article’s authors recommend a scratch-off answer sheet, for the sake of immediate feedback and point-scoring: for each question, the team agrees on an answer and scratches it off to see if they’re right. If the first answer is right, they get full credit; if not, they can keep scratching until they find the right answer, earning fewer points the more answers they scratch off.
(In other words, you can still get partial credit after a mistake, but you’re strongly motivated to aim for the right answer first. And teams are motivated to discuss and reach agreement. “‘Pushy’ members are only one scratch away from embarrassing themselves, and quiet members are one scratch away from being validated as a valuable source of information…”)
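As a sketch, the scratch-off scoring might look like the following (the exact point values are my own guess; the article doesn’t prescribe a particular scale):

```python
# Scoring a scratch-off team retest: credit depends on how many answers
# the team scratched before uncovering the correct one. The 4/2/1/0 point
# scale is hypothetical, just to show the shape of the incentive: aim to
# be right first, but a mistake still leaves partial credit on the table.

def scratch_score(num_scratches, points=(4, 2, 1, 0)):
    """Credit earned if the correct answer appeared on scratch #num_scratches."""
    idx = num_scratches - 1
    return points[idx] if 0 <= idx < len(points) else 0
```

So a team that is right on the first scratch earns full credit, while a team that needed two scratches still salvages partial credit, which keeps the discussion worthwhile even after a mistake.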
- Student teams write an appeal to challenge any questions they missed (or any confusion caused by the answers or the preclass readings). This part is “open book” and students are asked to build a strong case with compelling evidence for their view; “students learn more from appealing answers they got wrong than from confirming the answers they got right.”
- Finally, the instructor gives a brief lecture as feedback on the tests and appeals. It’s not a prepared, expository lecture, but rather a direct response to any challenges or confusion raised by students.
- In-class application-oriented activities, around 1-4 hours. Like the RAP tests, these call for student teams to commit to a specific choice/decision (and a justification), though they needn’t be multiple choice. These should be much deeper, more complex questions that require much discussion and analysis.
Some of the article’s example questions: Given this dataset, which of these claims is most supportable, and why? What is the most dangerous aspect of this bridge design, and why?
Each group should work on the same problem (to allow later class-wide discussion between groups too, not just within). Finally, the groups should simultaneously report their answers before a class-wide discussion. This holds each team accountable for their choice, rather than letting everyone default to agreement with whatever team went first.
This is a rather complex structure, so you’ll need to spend some time explaining it on the first day. Also, students’ grades will be based on all four deliverables: individual quizzes, team quizzes, team appeals, and team activities. There should also be some peer assessment, where team members evaluate each other’s contributions (do they prepare in advance, do they attend team meetings, are their interactions valuable, etc). You could even have student teams discuss & decide how to set grade weights for the course, as a first-day introductory activity.
At first glance, I’m a bit wary: this all feels like a canned routine. But maybe it’s not bad to have such a consistent structure. Plus, the structure really forces student engagement and limits lecturing. One major takeaway from this AL seminar has been to focus on preparing good questions and tasks, not just good lectures… and TBL just takes this view to the extreme. Although it would take a long time to prepare really good questions for the in-class RAP quizzes and group activities, the rest of the course should just fall into place automatically.
To write the assignments, you should work backwards from the learning objectives. What do you want students to be able to do? Imagine you’re working with a junior colleague and you can tell that they really know this material: what are they doing that demonstrates this? Take that imaginary scenario, and use it to design activities that would give evidence of student learning. (You’ll also have to evaluate the students’ justifications for their decisions. So think: “What criteria separate a well-made decision from a poorly made decision using this knowledge?”)
I also really liked the authors’ justification for using team activities with simple deliverables. If you ask for a clear-cut decision, teams spend their time debating the question and the course content. If instead you ask for a lengthy report, team discussions tend to focus on how to split up the work, and the report is likely to be several individual parts pasted together. So, by asking for a single decision (perhaps with concise justification/evidence), the activity really does encourage development as a team.
However, properly forming and managing groups is its own challenge (see the next reading too). The authors suggest keeping the same groups all semester, since it takes time for any group to meld into a cohesive team. Stable teams also encourage accountability for the preclass preparation: you know the same teammates will be annoyed if you consistently drop the ball. You should also assign an activity explicitly about group-process issues: teammates reflect on how their interactions changed over the semester and what changes helped the team cohere.
One last major concern: the authors suggest keeping public “team folders” that contain not just team grades, but also each member’s attendance and individual test scores. Sure, this “fosters norms favoring individual preparation and regular attendance”—but is it even legal under FERPA to show students each other’s grades? (Maybe they have students sign waivers?) Also, I can imagine insecure or privacy-valuing students would just drop your class on day one.
Still, TBL should be quite useful with a few minor tweaks. Finally, I love this quote from William Sparke(?): “teaching consists of causing people to go into situations from which they cannot escape except by thinking.”
General advice on team projects
Instead of the specific TBL course format, you might choose to use a few group projects more loosely. In this general setting, we read Finelli et al. (2011) for advice on setting up the teams, expectations, and assignments carefully. They also report that (effective) team projects seem to have strong positive impacts on women and minority students, as we saw with IBL above.
First, writing good team assignments takes careful planning. As with TBL, the authors here recommend starting with “simple, well-defined tasks,” though you can ramp up the complexity after a while. Examples: compare several options, complete a table of definitions, verify whether a rule was applied correctly.
Clarify individual vs team accountability, and help students define their roles and division-of-labor. Over the course of a semester, you could have students rotate through specified roles, for example: scribe, time-keeper, clarifier, reporter, manager. Rotating the leadership role is especially helpful.
If you assign larger complex deliverables like reports or presentations, be clear how both individuals and the team will be assessed. Instead of merely saying “Research X, then give a presentation,” it helps to scaffold the work a bit. Break down the assignment into sensible chunks, for instance: each member writes a 2-page section on one of the following sub-topics, and the final report should include all these sections plus a cohesive introduction. Each team member should contribute to preparing the presentation, and one member will be chosen randomly to present. Final team-level grades will depend on both the report and the talk; individual grades will be adjusted based on the individual report-sections.
Apart from the content-based assignments, you must also think about forming and managing teams. The authors recommend teams of 3-5 students, where the members are diverse both demographically and “functionally” (“how people represent problems and how they go about solving them”). For women and minority students, avoid isolating them: instead of one-on-a-team, it’s better for each team to have either 0 or 2+ women/minorities. Instructor-formed teams are more likely to be well balanced & diverse. Be sure to take student schedules into account too, so that no team ends up with zero possible meeting times.
Teach teamwork skills explicitly. (This is definitely something I need to practice myself.) Some ideas: Ask each member to take a learning-styles quiz, then have the team write a reflection together on how they might use any differences to their advantage. Have teams write their own list of strategies for successful teamwork. Ask teams to write their own contracts or charters, including a team mission, member roles, conduct norms, and how they plan to address conflicts. Finally, as the instructor, observe team dynamics as they work and try to guide them.
Finally, assess the teams themselves (not just the content-based assignments). Ask individuals to reflect on what the team does well and what they’d like to change. Use peer evaluations (there’s one in this article, and another online called CATME). Do both of these throughout the course, not just at the end, so you can step in to address problems early or give class-wide feedback on common problems instead of just using it for grading.
I found this paper’s advice and resources to be helpful. But I would also love to see more advice on managing ungraded or unpaid teams. When I advised a research team of undergrad students this summer, they were not graded (and I’m not senior enough to be asked for recommendation letters). And when I’ve managed a team of volunteer statisticians, there was no pay or anything else I could use to enforce deadlines. Without any real “power” or incentives (beyond the pleasure of a job done well), what else can I do to help manage conflicts or poor performance?
Just-in-Time Teaching (JiTT)
Here’s one more big, formal AL activity with an acronym name. As introduced in Novak & Patterson (2010), Chapter 1 of Just in Time Teaching, JiTT should encourage students to do pre-class readings, reflect on them, and come to class with questions.
The basic structure: For each outside-of-class reading assignment, also assign around 3 short but open-ended questions. Students’ responses should be due online a few hours before class. In those few hours, the instructor reads the student responses; picks a representative sample to present (anonymously) in class; and uses the responses as a starting point for in-class activities or discussions. Meanwhile, points raised during in-class discussions become the seeds of future pre-class questions.
Using this tight feedback loop, the students & instructor “collectively guide the construction of new knowledge.”
The authors also point out: “Student participation will be enhanced if students come to class with informed responses that they are eager to defend.” TBL (also Dan Meyer’s Three Acts) put more emphasis on committing to an answer (even if just a guess) during class time and defending it, while JiTT allows more time to compose a prepared response outside of class.
I admit this process sounds like a lot of last-minute work, especially if your class is large or meets several times a week. Personally, I’d feel more comfortable if I could prepare further in advance. Is it OK to make the reading responses due sooner than just “a few hours before class”? But I do admit that the students who’d respond far in advance are the better-organized ones, not the ones who need my help & feedback most. And the “just-in-time” aspect does keep the material fresh in students’ minds when they attend lecture. Finally, the authors say you’ll have a pretty good sense of what responses to expect after teaching the course just once.
In the classroom itself, show student responses at the start of class. See the article’s Table 1.2 for advice (show good answers too, not just weak ones; vary the authors shown across the term; etc.) and suggested follow-up questions (how could we add to this response? what part of this is correct/incorrect? what are the unstated assumptions here? etc.). Hopefully after teaching a JiTT course once or twice, you’ll have a few activities “ready to go” that you can just choose from after seeing student responses: a mini-lecture, a demo, a team activity, etc.
Again, as I said with TBL, the hardest part is probably in creating good questions. The JiTT Digital Library has some ideas to get you started. As the authors say, “effective JiTT questions:
- yield a rich set of student responses for classroom discussion.
- encourage students to examine prior knowledge and experience.
- require an answer that cannot easily be looked up.
- require that students formulate a response, including the underlying concepts, in their own words.
- contain enough ambiguity to require the student to supply some additional information not explicitly given in the question…”
Have a clear learning objective and motivation for each question you pose: to prepare for discussion? apply a concept? reconstruct ideas in the student’s own mind? build curiosity? practice metacognition and reflect on the student’s own learning?
You might also ask “After completing this exercise, what ideas are still unclear to you?” after each assignment. Besides being useful feedback for you, these could be useful discussion fodder to include in your class slides.
Apparently, students may “view JiTT as simply shifting the burden of teaching from the instructor to the student” if you don’t explain the idea carefully and motivate each question well (not just busy-work). It also helps to tell students about formative vs summative assessment, and how JiTT questions are “nonjudgmental diagnostic tools,” a chance to practice critical thinking and connecting ideas.
Still, you probably want to include some grading of the JiTT responses to motivate participation. The authors recommend having it add up to 5-10% of the course final grade. Remember that JiTT questions are on topics not yet covered in class, so grade on effort more than correctness, e.g. by using their rubric in Table 1.3. Clear grading policies and clear submission deadlines are necessary.
Finally, I’m glad that the authors admit this is a big shift for many instructors, and they suggest “background reading on transitions in professional practices.” See for instance the “reflective practitioner model” (page 73): Most of the time we follow a usual routine. A surprise (high failure rates in a class?) might draw our attention, leading us to reflect on what happened. Then we experiment with a new strategy, which may eventually become our new routine.
Technology for pre-class preparation and assessments
We returned to Bowen’s Teaching Naked (2012). Last time we read Chapter 8, on how to use your precious, limited, face-to-face class time. This week, we read Chapter 7: “Technology for Assessment,” on the fancy digital tools you can use for out-of-class assignments, to prepare for course meetings or assess student learning.
There are also good points about how the omnipresent Internet is changing education and assessment, and how to work with it rather than against it.
It now requires serious thought to craft a test or assignment that prevents the use of the Internet. Why bother? The real world is ‘open book.’ … Instead of asking students for the steps of the Krebs cycle, ask what consequences arise from the reuse of oxaloacetate as a starting material in the citric acid cycle? Ask students the second question, and allow them to Google the first one.
Of course, some basic information simply must be instantly available and is worth testing on closed-book exams (e.g., in medical school, the things a doctor must know instinctively). But most of the time, it’s better to ask students to evaluate evidence, make connections, and otherwise demonstrate judgment.
Furthermore, we ought to directly teach how to evaluate information (and its sources), just as “we used to show students around the library” (man, it’s depressing to see that written in the past tense…). What’s the difference between citing Wikipedia vs. using Wikipedia to help find original sources to cite? What to do if Google and Bing give strongly different results? When should you ask a forum of anonymous internet users vs. seek an expert opinion? What makes a self-professed expert reliable?
If faculty believe collaboration is wrong, then we must design courses and assignments that demonstrate both the power and the pitfalls of collaborative information. Determining who is an expert, what skills will be most useful in the future, which information is relevant, and if and when collaboration is better than expertise needs to become a central part of course design and assessment.
Try other evaluations besides exams. Can your students demonstrate mastery in some other way, such as the portfolios made by students in art/design schools, or bedside rounds of medical students, or live debates by law students? When you do write exams, consider making them open-book and untimed, then require analysis instead of fact-reporting. (Also, do check what happens if your questions are Googled, even if you want it to be closed-book!)
OK. On to the specific assessment ideas:
To encourage pre-class readings (and reflection), you might use JiTT as above, or you might just ask multiple-choice questions. Blackboard and other Learning Management Systems (LMS) make this a simple, easily-graded tool to keep in your arsenal. But take the time to write good questions! Keep language simple. Avoid negative answers (X is not Y), since they can cause confusion that’s distinct from the content-knowledge you’re trying to measure. Give plausible distractor answers of similar length. Don’t ask things that can be Googled.
Instead of asking for the single correct answer, consider asking for several best answers, for instance: “The following are all true statements about X. Which of them are most relevant to why Y? Check all that apply.” This encourages careful, slow reading (especially if it’s not a timed quiz) and teaches “students how to consider all of the evidence before they determine which is most important.” Later in the course, you can also build on this kind of question with “Which of the following statements (all true) would be best evidence to support the claim that Y?” This asks students to weigh evidence, whether or not Y is actually true. You can also follow up with, “Which of the following… would be best evidence to refute the claim that Y?” Instead of reciting facts, evaluate what you can do with those facts.
If you do ask questions with debatable answers, provide a forum for that debate. “I tell students we can argue about every question, but I do not change the acceptable answers until the following year.”
For something more involved than multiple-choice and JiTT, you can always assign a writing task. Be clear about the expectations, the organization of the paper, the kind of evidence that should be used, the “customs of your discipline”… Give students a detailed rubric that makes your priorities clear in advance.
Some writing prompts: What does the text say? (Don’t just parrot back facts, but e.g. “What might a Martian not understand about this?”) How do you & others interpret the text? (You personally; the author; two readers from these respective backgrounds; etc.) How do you understand this text? (Not “Did you like Hamlet?” but specifically “When is Hamlet most sympathetic and why?”) Why is this text/passage important? What causes this text to convey its message well/poorly? What disturbs you about this reading?
Evaluation of writing is still “one of the least scalable tasks in teaching.” Peer review of writing is one way to help with this, and besides, evaluating others’ writing is a useful skill to practice. The author also argues that students will work harder if they’re being judged by peers than by the instructor. You may need to give student reviewers a more-explicit rubric than the one you’d use yourself. If you have a large class and/or plan to do many peer reviews, you might try a tool like Calibrated Peer Review: you set up some practice essays that you’ve already graded; students grade these model essays; and they get calibrated based on how closely their grades match the ones you gave. Students who “grade more like you do” are given more weight when grading their peers.
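The calibration idea can be sketched numerically. To be clear, the weighting formula below is my own stand-in to illustrate the concept, not Calibrated Peer Review’s actual algorithm:

```python
# Sketch of weighting peer reviewers by calibration (the formula is my own
# invention, not CPR's actual method). Reviewers who grade the
# instructor-graded practice essays more like the instructor did get more
# weight when their grades of classmates' essays are combined.

def calibration_weight(reviewer_grades, instructor_grades):
    """Weight in (0, 1]; 1 means perfect agreement on the practice essays."""
    errors = [abs(r - i) for r, i in zip(reviewer_grades, instructor_grades)]
    return 1.0 / (1.0 + sum(errors) / len(errors))

def weighted_peer_score(peer_scores, weights):
    """Weighted average of peer scores for one student's essay."""
    return sum(w * s for w, s in zip(weights, peer_scores)) / sum(weights)
```

A reviewer who matched the instructor exactly on the practice essays gets weight 1, while a badly miscalibrated reviewer’s opinion is discounted rather than discarded.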
The author has a section on “[computer] games for assessment.” I’m wary that the time spent polishing such a game might be better spent developing the course in other ways. But there are some good points, like about his tests where students “recognize musical styles by identifying random audio clips… When a technical support person cautioned that a student could cheat by memorizing all of the 150 music examples, I realized that memorizing was not cheating but was actually promoting the learning that I hoped they would achieve. That insight led me to move … into a gaming format where students move up levels as they master genres, composers, or performers.”
Besides, “games are challenges whereas exams are just scary.” The author describes something a bit like the “specs-based grading” that I used in teaching my dataviz course. Instead of scoring points on a one-time exam, students play the 10-level game as many times as needed to show mastery, knowing they have to get to Level 9 to earn an A but are free to stop earlier. Unfortunately I don’t really know of good game ideas like this for teaching Statistics.
Several of these activities (IBL, TBL, possibly JiTT) seem to call for a thorough rewrite of the course. I can imagine doing that with courses I’ve already taught (Experimental Design, Statistical Graphics and Visualization)… but I may not be ready to dive in when teaching a new course from scratch.
Still, when I’ve taught new courses in the past, I’ve already tended to spend a lot of effort: repurposing a previous instructor’s materials and rewriting the lecture notes extensively. Now, after taking this Active Learning seminar, I’m reconsidering that strategy!
It might be better to spend that energy on writing new assignments: reading checks (open-ended like JiTT or make-a-decision like TBL, even if I don’t fully use those course-formats) and longer in-class activities. Then I should just reuse old lecture notes directly, but pause lecture often for short breaks or think-pair-share discussions—that’s where the real learning happens anyway, not in what I say. Or, I can make the lecture-notes part of required reading and hardly lecture at all, if I come up with enough in-class activities instead.
Besides, if I create a good question-bank (discussions, quizzes, tests, activities), this will likely be more transferable—more useful to other instructors than a good set of lecture notes. There are plenty of “good enough” expository notes out there on every topic. It’s harder to find good-enough “situations from which [students] cannot escape except by thinking,” as the TBL article put it.
¹ Stereotype threat: People in a negatively-stereotyped group can be strongly affected by the fear of conforming to that stereotype, wasting cognitive energy on trying to overcome it instead of just focusing on the task. I strongly recommend reading Whistling Vivaldi for more on this topic.