Field of Science


Student Editorial and Missing the Point of College

Today represents an awakening for me. I was browsing the student newspaper and came across the following editorial: 'Professors owe us study guides'. My first thought was 'what the hell!', but I quickly remembered that titles are generally not written by the authors. So I decided to read the thing; I suggest you do the same. I'll wait. Sorry if you're now thinking Trump is a good choice; not my fault, it was the editorial writer's.

The author starts out reasonably enough: "With midterms in full swing, many of us at the University of Minnesota have been studying diligently to prepare for our exams. As these assessments often represent a large percentage of our grades, it’s very important to do well on them." It's good to hear that you are studying diligently; that seems appropriate, being that you're an adult in college and all. But there's already a whiff of something problematic: the focus on the grade. Yes, grades are important, and you should want to do well on your work in order to obtain good grades. However, grades in and of themselves are a means to an end; they are not the end.

The problem is manifest in the third and fourth sentences, which encompass the entire second paragraph: "But there’s nothing more frustrating than trying to study every little tidbit of information you’ve learned since the start of the semester. It simply isn’t going to happen, especially because students often have more than one midterm exam."

There is so much fail embodied in this, I can't even. First, if you are studying every little tidbit of information, then you apparently never spent any time during your education thinking about teachers, professors, tests, quizzes, and exams, realizing that people have personalities, and using that information to study strategically. Is something emphasized over and over? Probably a good idea to study it. Was something highlighted as important, meaningful, or a key point? Probably a good idea to study it. Was something mentioned once as a brief aside, maybe in response to a question? Probably not worth spending much, if any, time studying. As this is spring, this semester almost certainly represents at least the second semester the editorial writer has been at the University of Minnesota (there's a small possibility they just transferred in). Second, we actually know that students have midterms around the same time. It's weird, because at the end of a semester there's this thing called 'finals week,' where all the professors schedule exams over a few days, yet supposedly none of us knows the others do the same thing. Third, I hate to tell you this, but it does happen. Maybe not with you, but many students do learn a lot of material. I know I've given out the A's to prove it. I've written letters of recommendation for students with 3.85 GPAs. Just because you cannot be bothered or feel like it's too much of a burden, well WAH. But here, this is just for you:

Fourth, you need to realize that it is a University of Minnesota policy that, on average, a student should expect to spend three hours a week per credit of class. So a three-credit class means ~9 hours a week of in-class and out-of-class work. Maybe some weeks you work 5 hours; an exam shows up and you work 10 hours; your average is still less than 9 hours a week. And to be clear, that three-hour-per-credit average is for the average student to receive an average grade, aka a C. You want a better grade? You should expect to work more hours on average. (Now might be a good time to wash your hair again.)


For full disclosure, I know many classes expect much less work than this. But those classes are in fact screwing you over. In my primary class, for which I have uniformly received excellent student evaluations, students note how much work the class is and ask that the credit load be increased because it's so much work. However, most students who note the work also fill in the 6 - 9 hours/week bubble on the scantron (for my 3-credit class). Realize that a full credit load of 15 credits represents slightly more than a full-time job commitment (45 hours)!
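The credit-hour arithmetic above can be sketched in a few lines. This is purely illustrative; the function name is mine, and the 3-hours-per-credit figure is just the policy number cited above.

```python
def expected_weekly_hours(credits, hours_per_credit=3):
    """Average weekly workload (in class + out of class) implied by a
    credit load, per the ~3 hours/credit/week expectation for a C."""
    return credits * hours_per_credit

# A single 3-credit course:
print(expected_weekly_hours(3))    # 9 hours/week

# A full 15-credit load:
print(expected_weekly_hours(15))   # 45 hours/week, more than a full-time job
```

And remember, that is the average workload for an average grade; aiming higher means budgeting more.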


Ok, that deals with the generic dime-a-dozen complaint about studying (you're not in high school anymore). This really isn't more than what is complained about every single year. My problem is with what comes afterwards: "Study guides help students to focus their efforts and weed out concepts that won’t be on the exam. This can save a huge amount of time. Not to mention, by ignoring the concepts that won’t be tested, we can spend more time on those we’re expected to know, increasing our understanding of the material and, hopefully, our test scores."

The arrogance of those twisted sentences represents much that is wrong with the approach students have towards a college education. Some, like this letter writer, consider college an extension of high school: something they have to do and mostly a waste of time. I thought much of high school was a waste of time, but that was because I didn't know any better. The difference is that I was required to get an education through the 12th grade, or at least it was a serious issue to drop out and not something that could be simply accomplished. Not so with college. There's no law requiring post-K-12 education. I realize that to get many jobs a college education is required, even if in reality it isn't needed. But you know what? Many jobs, including good jobs, do not require a four-year degree.


Let me explain something to our entitled editorial writer and those who agree with them. What we teach in class is important, or at least it is considered important by an expert or experts in the field. Courses have to be approved by a committee based on what the learning objectives are, how they will be presented, and how they will be assessed. To adequately test someone on their grasp of the material, it usually means you are asking focused questions on specific areas. Because something isn't on the test does NOT mean it is not important to know or needed for subsequent classes. You are not in college to take the fucking tests, and I apologize that the ACT/SAT companies, politicians, and standardized testing have you thinking you are. You are correct: if we tell you what to study so you can vomit it back to us, you will get better grades more easily. That isn't the fucking point. Also, how can you suggest that if you spend more time studying fewer things, you will increase your understanding of the material? No, you will not. You will only know a part of the material well, and even then probably only well enough to recognize words on a multiple choice exam.


"Not all professors allow students the luxury of having study guides, however. As someone who cares tremendously about my grades, there’s nothing more frustrating than having to study extra information that’s not part of an assessment. The less information I have to worry about knowing, the better I’ll learn it and be able to recall it in the future." See, you should not be at a university. I'm sure you're smart, at least by the standards of standardized testing in a high school that every child in the community attends regardless of proficiency. But you are missing the point (that sound you just heard? that was the point flying by). The point of college is not your grade. Sorry, it's true; time to grow up. You care tremendously about your grade, with no mention of knowledge or understanding. Simply your grade. Truly, the writer makes me weep and gnash my teeth simultaneously.


I especially love this thought: the less I have to learn, the better I'll be able to learn it. Well, no shit, Sherlock, no fucking shit. It's kind of like asking for a raise and justifying it by saying you can buy more stuff. Also, if recall is your goal, you're doing it wrong. Life is not, I repeat (because it might be on the exam), not a fucking multiple choice test.

Finally, we end with, "For these reasons, I think it’s incredibly important for the University to require professors to provide students with comprehensive study guides. Each individual college could delegate specific guidelines, but the policy should encompass the entire University." Well, letter writer, let me end with a comprehensive guideline for my upcoming exam: T, F, T, T, B, C, C, A, E, D, D, and more than 14.

First Class in the Books

The state fair is over, indicating the end of summer and the beginning of a new semester. I taught my first class today, which of course means I basically met the students and introduced them to the course. In other words, we went over the syllabus…kind of. The course I am talking about is Eukaryotic Microbiology, an upper-division course that uses the primary literature to teach students about eukaryotic microbes, scientific thinking, argument, etc.

The third slide in this lecture is the following (from here, with slight modification):
This slide gets used throughout the course, but I use it in the introduction lecture to highlight how unfamiliar almost all students are with the diversity within the Eukarya. Basically, students are familiar with green plants at 12:05, fungi (except for the microsporidia) at 3:15, and the animals, including sponges, at 4:00. Other than the Opisthokonts (in blue) and a minor fraction of the Archaeplastids (in green), the vast, vast diversity of the eukaryotic lineages is basically ignored in biology courses. Admittedly, there is lip service paid to Plasmodium falciparum (the primary agent of malaria) over in the Alveolates. But just look at how little is brought up! Of the eight major eukaryotic lineages, only two are routinely discussed. Think of all the biology out there we know so little about! This, in my opinion, is incredibly exciting.

Aspects of this problem were recently brought up by Larry Moran and PZ Myers (by way of Jeffrey Ross-Ibarra). Again, all that diversity noted above falls into choice C.

Now that I have hopefully instilled some small sense of awe, or at least lit a candle of interest, in my students, we deal with the syllabus and some course-specific issues. I do want to point out that this course is writing intensive, which means a bunch of things, but basically we do a fair amount of writing (surprising, huh?).

There are two things we did today I want to mention. First, I asked them what their goals are in relation to the course. (Other than getting an A.) I had them spend a couple of minutes writing down their thoughts, and then we discussed them. This represents one easy way to get the students talking in a relatively stress-free environment. Open discussions are an integral part of the course, and the sooner I get students comfortable speaking up, the better. My goals were: 1, to give the students a broad sense of the importance of eukaryotic microbiology; 2, to increase their fluency with the scientific literature; 3, to hone their critical thinking skills. I won't divulge the students' goals.

Second, we discussed plagiarism, as it is a writing-intensive course. I have found that students know what plagiarism is, but if you ask 20 students for a definition, you'll get 12 - 15 different variants. I also have the students write down what they think the consequence for plagiarism should be. This leads to yet another relatively stress-free discussion and serves to develop a sense of student ownership of the course. Once the discussion is complete, we agree on a definition and consequences that are posted to the course website. This year we came up with:
The 4161W class of 2014 has agreed to define plagiarism as not giving credit for others' work, including words and ideas, that is not common knowledge.
and the penalty:
Students who are found to have plagiarized will receive an F on the assignment and be reported to the Office for Student Conduct and Academic Integrity for the 1st offense. A subsequent offense will result in an F for the course and another report to the Office for Student Conduct and Academic Integrity.
As is normal for the first lecture, I did not get through everything. Luckily, Friday allows time to finish going over the course and to discuss our first paper. This discussion allows me to demonstrate what I expect of the students when they do presentations and gets the students started reading papers. The paper we discuss on Friday is:
Complementary adhesin function in C. albicans biofilm formation. Nobile CJ, Schneider HA, Nett JE, Sheppard DC, Filler SG, Andes DR, Mitchell AP. Curr Biol. 2008 Jul 22;18(14):1017-24. doi: 10.1016/j.cub.2008.06.034.

Response to poorly argued opinion on student evaluations

This week the campus paper published an opinion piece by Harlan Hansen, professor emeritus in the College of Education and Human Development, entitled 'The missing factor of course evaluation discussion' with the subtitle 'The University should use its best faculty to teach and improve others.' I have commented previously on the issue of student evaluations and the release of information to prospective students. Despite a previous attempt to have student evaluations released, which failed by a large margin, the proposal will not go away. Basically, I think the administration and associated faculty who want the information released should simply mandate that the information be released and send a big 'and fuck you too' to the 90% of faculty who do not want student evaluation information released. Otherwise, it seems like we will just keep discussing and voting on it until the vote comes out the right way.

Regardless, I want to rant about this opinion piece for several reasons.


First: the first paragraph or as I like to call it, holywhatthefuck!

When I arrived as a faculty member of the University of Minnesota in 1968 I remember a publication that rated course instructors. A few years later, I believe, it suddenly ceased publication because of faculty requests, I assume. Forty-three years later, the request for that information by students is still a nagging question.
Can you find all the logical fallacies? So in 1968 there was a publication that rated course instructors. I will accept this at face value, but I have some questions: Was this information disseminated to the student body, and if so, how? Was this information disseminated to the faculty as a whole, and if so, how? Was this information used by students to help decide which classes to take? How were course instructors rated? Did this information rate every course and every instructor, including non-faculty instructors? (I'm assuming some courses/labs were taught by graduate students in 1968, though this may not be the case.) I'm not sure if Dr. Hansen realizes this or not, but the technology and dissemination of information are fundamentally different in 2014 than they were in 1968.

Then we get to the second sentence, which includes both 'I believe' and 'I assume'. I cannot help but wonder what kind of academician Dr. Hansen was. Let's accept that this publication existed and ceased publication in the early 70s. Why should we assume that it ceased suddenly (did you hear the ominous music just then?) because of faculty requests? Dr. Hansen simply assumes it. Here are several other possibilities: maybe no one went to the library to consult this voluminous publication to help choose courses or for any other reason; maybe it was out-of-date by the time the information was published (remember, this was before computers were collating all the information via scantron forms for import into Excel spreadsheets for rapid organization); maybe the costs associated with the publication were not offset by its usefulness. See, there are three other possible reasons without even trying. Your assumption carries no weight.


Finally, we get to the last sentence, which has little to no linkage to the previous sentences. Have students been requesting this information for 43 years? Is it really still a nagging question? In 1983, was there an outcry for instructor rating information, even though that information wasn't actually collected and therefore didn't exist? A student who turned in a paragraph like that in my classes would not fare well. But alright, let's assume an editor took out all the cohesion in the introductory paragraph setting up the issue to be addressed.


Second: unsubstantiated claims or as I like to call it, pullingshitoutofmyass, I assume.


Consider the following points made by Dr. Hansen:

"students say they want information that will lead them to more interesting and effective professors. Second, faculty who were quoted in the news minimized the students’ requests as wanting easy courses with high grades by instructors who tell good jokes."
I'm sorry, but isn't an instructor who tells good jokes generally considered more interesting? Regardless, I have to concur that many, though not all, students would rather have an easier course on a topic than a more difficult course on that topic. I could be wrong, but a slightly earlier opinion piece published in the campus paper seems to support my position.
"relative to my years of experience at the University, ratings of faculty instruction do not change over the years." 
I would not be surprised about this, but data please. Also, how the hell does he know? Didn't this bible of instructor ratings stop being published in the early 70s? Maybe he was department head and saw the student evaluations (when we actually had them), which would raise the question, why didn't he provide training for his ineffective faculty?
"“A” and “B” instructors have no problem sharing their ratings"
Again, data please. Hell, I'll even provide a data point: on my student evaluations, using a 6-point scale (6 being the top score), I fall well above 5 in almost every category every year. In those remaining categories I still fall above 5 every year. So, does this make me an 'A' or 'B' instructor? It seems like it should, and if so, count me as an instructor who has a problem sharing my ratings.

Third: problem solving. 

Having established the problem using the holywhatthefuck and pullingshitoutofmyass approaches, Dr. Hansen then proceeds to assign blame. See, it's not just the ineffective instructors; it's an administration problem. (Again, I want to stress we have never defined effectiveness or established criteria to quantify effectiveness other than student evaluations, which best correlate with students' expected grades.) And now we get to the solution:
"The president of the University should charge deans and department heads to put in place programs that can help all instructors improve over time."
Personally, I think these programs are useful and important. However, I wonder where the resources are going to come from for deans and department heads to do this. Programs do not come from a vault of readily available, no-expense resources. This solution also raises the question: why don't faculty development programs exist already? The answer is that they do; I have attended several. I wonder when Dr. Hansen retired such that he is unaware of them. Admittedly, these programs are voluntary, but they do exist.

Of course Dr. Hansen does have a remedy to this apparent lack of teaching development:
"The key factor is assigning current colleagues who have demonstrated quality teaching skills to share and demonstrate with those in need. While this may appear threatening to individuals, it establishes a community of scholars within each unit where, eventually, everyone can share positive techniques with each other."
Yes, because nothing rewards successful teaching like getting more work and responsibilities to train and manage the ineffective instructors. Don't forget, these are the same ineffective instructors who really don't want to get better, as Dr. Hansen noted above when he states that faculty instruction does not change over time.
My opinion of the opinion: from here.

I am still surprised by the whole student evaluation movement. We have no good data suggesting that student evaluations gauge effective instruction; some studies do suggest this, but many others do not. I have heard from colleagues that students simply want more information about a course, and god forbid they go to some commercial site like Rate My Professor. Well, I am all in favor of more information, as long as that information is valid for what you are attempting to learn/show. Student evaluations do not, I repeat not, seem to correlate with effective or quality teaching, so what information are the students receiving about a specific course/instructor? The student evaluation is being changed to ask the following questions, which represent relatively minor changes in wording compared to the current evaluation:
1. The instructor was well prepared for class.
2. The instructor presented the subject matter clearly.
3. The instructor provided feedback intended to improve my course performance.
4. The instructor treated me with respect.
5. I would recommend this instructor to other students.
6. I have a deeper understanding of the subject matter as a result of this course.
7. My interest in the subject matter was stimulated by this course.
8. Instructional technology employed in this course was effective.
9. The grading standards for this course were clear.
10. I would recommend this course to other students.
11. Approximately how many hours per week do you spend working on homework, readings, and projects for this course?
   • 0-2 hours per week
   • 3-5 hours per week
   • 6-9 hours per week
   • 10-14 hours per week
   • 15 or more hours per week
Of these questions, only those in blue are proposed for release to the students. Questions 7 and 10 seem to provide useful information on whether the student liked the course or not. Question 8 is irrelevant to the discussion of instructor rating for the most part. Questions 6 and 9 may provide insight into the instructor's effectiveness and fairness. Question 11 is the great equalizer: if two sections of the same class differ here, which do you think a student would gravitate towards? This is not minimizing student concerns; I would rather take a course that required less work too, other things being equal.

You may be asking, 'Why the hell aren't questions 2, 3, and 5 being released?' Good question, as these seem key to a student's ability to decide which courses they want to take. Courses do not exist in a vacuum; without an instructor we might as well attend Google University and write papers on vaccines and autism. Those questions are not being released because they may reflect specifically on an instructor. (Duh!)

Of course, we must consider: whatever will a student do without online released student evaluations? I mean, what has happened over the last 43 years! If only there were some way one student could relate information to another student about a course. Some form of communication, I don't know, like texting, or tweeting, or posting to any number of social media sites; fuck, maybe they could simply open their mouths and have words come out in the direction of another student's ear. If only. Sadly, I doubt our students are even aware of these modes of communication.

Poor US Education Meme Infects the Minnesota Daily

It's bad enough reading the standard misinformation regarding K-12 education in the popular press, but now it has infected our student paper too. The editorial compares the curricula of Germany and South Korea as educational systems that could be used as models to improve US education. But the question, the answer to which is assumed in this editorial, is: is the US education system actually doing poorly?
Dunces unite

Based on the popular press, you'd think US education is in complete disarray. This idea is supported by tests that compare the US to many other countries.


For example, Pearson ranks the US as 17th overall in cognitive skills and educational attainment (Finland, South Korea, Hong Kong, Japan, and Singapore rank 1-5). The US sits between Belgium and Hungary, and for the record, Germany comes in at a devastating 15th. These rankings spanned 2006 - 2010.


Furthermore, the Programme for International Student Assessment (PISA) 2012 rankings have US 15-year-olds at 17th in reading, 23rd in math, and 21st in science out of the 34 Organisation for Economic Co-operation and Development (OECD) countries. (The US ranked 36th in math among all countries/areas tested.)

Highlander ranks higher than US in math and kicking ass

These rankings are problematic for several reasons.

Zero-Sum Games: For the US to move up in the rankings, other countries must go down. As the Highlander says, 'There can be only one.' Is the US education system likely to be that much stronger than the education systems of the United Kingdom? Germany? Japan? Canada? France? Belgium? I'm not suggesting we should not try to attain the greatest achievement possible, but don't you think these other countries also want their students to succeed? Even if we thought of it first (we didn't), other countries would likely have noticed and followed suit.

Apples and Oranges: The US is not a monolith of education. If anything, we're a monolith of stupidity. We have a decentralized education system: each state can do what it wants, thus states like Tennessee and Louisiana, which overtly teach biblical creationism, may do poorly on science exams. Using the US as a single entity for comparison's sake does reveal major shortcomings in our educational system. But it's basically worthless, unless your goal is to eliminate public education and replace it with a mechanism to move more taxpayer money into corporate hands.

From Slate
If we look at states individually, something different emerges.

On the PISA exam, the US math average was 481, placing us 36th among all countries/areas tested. The average for the OECD countries was 494, putting the US well below average. But if we look at individual states, we find that the average in Massachusetts was 514, Connecticut was 506, and Florida was 467. Two states well above the OECD average, and one state 4 points below Croatia, a country recently established from the ruins of Yugoslavia.

Similar results are seen in the science averages. US average: 497; OECD average: 501; Massachusetts average: 527; Connecticut average: 521; and Florida average: 485.

Any guesses on reading literacy? US average: 498; OECD average: 496; Massachusetts average: 527; Connecticut average: 521; and Florida average: 492.
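The within-US spread can be made explicit with a short script. The numbers are the PISA 2012 averages quoted above; the script itself is just an illustration that recomputes each entity's gap from the OECD average.

```python
# PISA 2012 averages as quoted above (math, science, reading).
oecd = {"math": 494, "science": 501, "reading": 496}

scores = {
    "US":            {"math": 481, "science": 497, "reading": 498},
    "Massachusetts": {"math": 514, "science": 527, "reading": 527},
    "Connecticut":   {"math": 506, "science": 521, "reading": 521},
    "Florida":       {"math": 467, "science": 485, "reading": 492},
}

# Gap from the OECD average: positive = above average, negative = below.
for entity, subject_scores in scores.items():
    gaps = {subject: subject_scores[subject] - oecd[subject] for subject in oecd}
    print(entity, gaps)
```

Massachusetts and Connecticut come out above the OECD average in every subject; Florida comes out below it in every subject; the pooled US number splits the difference.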

Do you see a trend there? It looks like some states, e.g. Massachusetts, do extremely well, helping promote a strong US score. Yet other states, I'm looking at you Florida, fuck it up for everyone. You'd think the talking heads would be asking 'what's working in top-performing states like MA, NH, MN, etc.?' or even 'what's not working in bottom-performing states like FL, MS, AL, etc.?'

I bet people in Massachusetts really want to overhaul their successful education system in order to try out a new one that might improve Florida's scores.

We spend so much time disparaging teachers that many rankings done in the US use teacher tenure, teacher seniority, and charter school availability as major criteria in their evaluations of state education, regardless of student outcomes! If a state has teacher tenure and great student achievement, should that state be dinged? For example, the American Legislative Exchange Council (ALEC, not affiliated with Congress) came out with its rankings: Massachusetts received a C; Florida a B.

I suggest you go back and look at the PISA scores and let that digest for a minute. If you think the PISA test is a Muslim plot, you could also look at the USA USA USA National Assessment of Educational Progress (NAEP) rankings: MA, with MN and NH, first in 4th grade math; FL 30th. MA first in 8th grade math; FL 37th. MA first in 4th grade reading; FL 13th. MA first in 8th grade reading; FL 33rd. (Was that a complete sweep?) Yet ALEC says FL is a clear letter grade better than MA. WTF?!?! (ALEC cares little about education and more about policy that ends public education: Vermont and Rhode Island received D+'s yet were 2nd and 6th in the country on the NAEP tests, respectively; Utah (B-) and South Carolina (C) were 41st and 50th on the NAEP tests, respectively.)

For the record, MA and other top performing states do well across economic spectra. These results are not simply due to socio-economic differences. However, poverty clearly has a profound effect on education achievement. 

Coming full circle, it's not a zero-sum game here either. For FL to improve its ranking, other states have to lose positions. The point is that using the US as an education collective to compare against actual education collectives is ridiculous.

Regardless, I am tired of hearing about the travesty of the US education system when in fact many states are doing great but are dragged down in national surveys by poor-performing states (I'm looking at you, deep south). We should look at the data coming from these assessments and tests and determine what is valid (are students in Singapore better prepared for the test due to the timing of the curricula? do all students go to school, and are they all tested, in China?). We should also celebrate our accomplishments, YAY Massachusetts, and recognize our problems. I'm looking at you, Florida; I'm also looking at Minnesota, which is doing great on these tests but still has a huge achievement gap.

So thanks, Minnesota Daily, for getting me to write this. For the record, we don't need to look to South Korea or even Germany to fix our education system. First, we have to realize that a single US education system does not exist, so it cannot be broken and does not need to be fixed. Second, we only have to look at our neighboring states to see what works and what doesn't. Third, if there are applicable educational innovations developed overseas or even up North, I'm all in favor of trying them. But realize that much of our system works well, and let's not fuck it all up because of Florida.

Teaching Critical Thinking

One question I grapple with is 'how do we get students to ask questions about, or rather to question, peer-reviewed research papers?' This is based on my experience that undergraduate students, and even many introductory graduate students, have difficulty grasping the concept that there may be issues, or even important problems, with peer-reviewed research. Part of this is based on an inherent appeal-to-authority/self-confidence issue: how could a lowly undergraduate find something problematic with a paper written by Ph.D., or equivalently trained, scientists?


Why We Care About Critical Thinking
However, the question I am grappling with is just a subset of a more important issue: how do we teach students to ask the 'right' questions? The key here being 'right.' This is a fundamental aspect of critical thinking: being able to identify the assumptions, biases, needed controls, discrepancies, etc., in an argument, and a peer-reviewed research paper is nothing if not an argument. I find the most successful approach is to identify these, and other, points by asking questions. Again, these questions have to be the 'right' questions.

In my advanced undergraduate class, I can usually sort my students into 3 categories: the non-questioners, the trivial questioners, and the rare critical-thinking questioners. By the end of my course, I want my students to find themselves generally in this last category.





The Shy Student

In the first category, the non-questioner, we find the shy students who are uncomfortable speaking up. This silence could be the result of inherent shyness, poor classroom experiences, or even cultural issues. In fact, this point of 'cultural issues' reminds me that it is important to remember that women and minorities are frequently ignored or blatantly omitted from discussions. In my experience, there are as many if not more women promoting the discussions in my course as men. Regardless, I try to address the issue of cultural differences early by calling on women and minorities during our discussion sessions.

Also in this category are students who are not confident with the material and thus do not want to speak up for fear of saying something stupid. I provide many resources and tools to help bring students up to speed if they are missing some background, so I tend to be less sympathetic with these students because they, by definition, must be aware of their deficiencies and choose not to address them. Of course, it takes a while to separate these students from those who are shy, and it is disheartening to identify a student as intellectually lazy, lazy in general, or indifferent. To be clear, I have had students who lacked some of the foundational material needed for my course and worked hard to address those gaps, and I help them as much as possible, holding one-on-one meetings as often as they need to go over concepts, specific papers, etc. I love these students, because they have a drive that is infectious.

Getting back to the shy students: how can I get them engaged in the course, such that they can move to category 3, without having to change their personalities? For these students, all students actually, I have online components to the course. In addition to in-class discussions, I have an online forum to initiate new discussions or continue discussions started in the classroom. This provides a place for students who are inherently shy and students who are not comfortable thinking on their feet, which is what the classroom discussions entail. Students can use these forums to ask broad questions, initiating discussions beyond the minutiae of the papers they read. Students can also ask for help if there is something in the papers, a method, conclusion, etc., they do not understand, and students can help their colleagues by answering those requests. While I monitor the discussion boards, I refrain from commenting as much as possible, so that it quickly becomes a student-driven environment.
While not perfect, these mechanisms help move students from category 1 to category 3.


Were you there? Only applies to science, not the New Testament.
The second category, the trivial questioner, is where I do the most work. Not that a specific student is a constant trivial questioner; rather, it is a place we constantly come back to in class. This is not a problem, because it serves as a recurring 'teachable moment.' The trivial questioner falls into the meme that there are no stupid questions. Of course there are stupid questions! In fact, the 'there are no stupid questions' comment is itself a stupid comment. I understand the 'there are no stupid questions' concept, but it is used with the tacit understanding that everyone is acting in good faith. This is seldom the case. For example, we have Ken Ham's 'Were you there?' question. This question is bullshit and not asked in good faith. The fact that he encourages ten-year-olds to ask this question just serves to exemplify the moral vacuum in which Ham resides. Ham knows this is a bullshit question, but it is a nice gotcha-sounding soundbite for the masses. However, the ten-year-olds Ham sends out to 'ask' the question do not know why it is bullshit, and that is his goal. The deeper problem, in my opinion, is that Ham knows many children would never ask the question out loud, but will think the question and then answer it for themselves. Still, we can use it as a teachable moment. In my course, there are many 'were you there?' type questions. Not necessarily from the Ham perspective, but from the ten-year-old's 'this sounds good, I'll go with it' perspective. These kinds of questions are particularly common at the beginning of the course, and I like to think I help move these students into the 3rd category. It's possible that I push some of them into the 1st category instead, but I doubt it, based on the quality of the discussions as the semester progresses.
By way of example, every year when discussing a paper using a mouse model, the question will arise: 'Well, I am concerned that the study only used female mice, and I wonder what the data would look like if male mice were used?' This kind of question is relatively easy to come up with because we teach students 'black and white' thinking, where everything is a binary decision. So when the student reads the materials and methods and sees '20 female C57BL/6 mice were....', the student immediately thinks 'male' (or vice versa), in a 'tell me the word that pops into your head when I say...' kind of way.

So how do I encourage questions/comments from these 2nd category students without pushing them into the 1st category? What I have found works is to mimic Socrates: I ask questions. For example...

Student: I am concerned that the study only used female mice, and I wonder what the data would be if male mice were used?

Me: Why do you think this might make a difference? I agree that there are important differences between females and males; I'm wondering how you think these differences apply to this study?

Student: Well, there could be differences due to hormones or something...

Me: That's a great point, because that is clearly the case in certain instances, like Paracoccidioides infections. Is there anything in this system that makes you think there would be a sex-based difference?

Student: .....

Me: Ok, that's a good point at face value, but maybe it needs further consideration. Did anyone have additional issues with this study?
The point is to encourage, even require, the students to have a scientific justification for their concerns, questions, and critiques. This, in my opinion, is the most difficult thing the students can learn and that I can teach. I need to teach the students how to question the studies, but also how to question their own questions and concerns. And I want to emphasize that it is ok to be wrong! We talk about well-conducted studies that have generally solid conclusions and identify potential concerns. These concerns may not change the overall conclusions, but they do raise issues with sub-conclusions that may not be valid. In fact, the introductory paper we are discussing is the one I railed on previously. We will also be considering the press release. This is a change from the last couple of years, when we discussed a well written, described, and assessed paper. I'm interested to see how this approach works.


Own this book!!! 




Finally, we come to the third category: the critical-thinking questioners. For these students, I can only refine their skills, improve their writing, and expose them to new and interesting areas. Almost uniformly, these students have research experience, often significant experience. However, they almost certainly have holes in the areas they have not been exposed to. These students know the potential issues in the areas they are familiar with, but lack the same approach/mindset in unfamiliar areas. This is one of the reasons it is essential for scientists to read well outside their fields. Breadth of knowledge promotes a better assessment of how your studies fit into the broader world of science. This increases the impact of your research, not only in the study in question but also in the questions you ask in the first place.

FYI: while I have categorized student comments/questions into 3 groups, no one student (nor the instructor) fits neatly into a given group. Furthermore, I love my course because I learn so much from the students, even those that tend to cluster in a specific category. The lessons I learn may vary, but I learn important concepts, holes, and insights from all three groups. I thank the students from previous years for helping me develop these insights and improve my courses.

Grades: Do They Mean Anything Anymore?

Alright, I'll do it, but I'm not happy about it.....ok, I'm a little happy about it. I mean, I'm not happy to have to call out my profession, but I am a little happy to cause some well-deserved discomfort. Look, I have a battery of K-12 teacher slips noting my problems with authority and the status quo. I have also realized that I am happiest when I question my friends, disagree with them (even when I agree with them), and push others to justify their positions. (I may piss off a lot of people, but fuck 'em. They should be secure in their positions if those positions are sound. Plus, I provide tasty malt beverages, i.e. the great equalizer.)

So without further psychological adieu, I want to discuss grades at the collegiate level. As I noted in a previous post, this is something that appeared on my radar and deserves attention. I am going to focus on my institution, but this problem is not unique to it, nor is it unique to public colleges. It is a systemic problem that must be addressed, or, if it is not, maybe it's time we rethink our mission statements. For example, a recent study looking at the % of grades distributed by private and public colleges over time showed the following:


Rojstaczer & Healy 2010
In 1960, private and public schools were indistinguishable in their grade distributions. Furthermore, the distribution shows a predominance of Cs, with slightly fewer Bs, and fewer As still. There were even fewer Ds and Fs relative to As, but there is a clear bell-curve distribution of grades. I have to admit this makes sense to me. I would not expect a preponderance of Ds and Fs at the college level in 1960. High school graduates were not expected to go to college, so there was a bias in who did. If you applied and were accepted into college, it is doubtful you were unprepared or unable to handle the work. (Obviously there were issues of overt racism and sexism that played into these numbers, but the point is that college was an 'elite' institution. (It was 'elite' for both democrats and republicans, so shut the fuck up.))

By 1980 a shift had occurred, and it continued through 2007. In private schools, the % of As and Bs increased compared to public schools. However, this is not the comparison you should make. Rather, see how the private schools compare over time, with the solid green (1960) line as your reference. There is a huge shift to the B grade (in 1980) and then to the A grade (in 2007). You can repeat this for public schools and see the exact same trend: a shift from C --> B --> A from 1960 --> 1980 --> 2007!!!

The question is why? If we break the data down more finely we see:
From here
Holy crap! WTF happened between 1964 and 1975??? I mean, what could it be and why would it matter? Oh wait, I know...
The Vietfuckingnam War
The fucking Vietnam War. You may be too young to remember (I am), but there was a draft. A fucking draft, where young men were forced to serve and fight for their lives in a war old white guys deemed necessary. Of course there were exemptions, like being in college, being a farmer, being in the clergy, or being the son of a rich white guy. It is important to remember that almost 60,000 US young men were killed in Vietnam. (Although it is interesting to note that many men who were exempt from serving in Vietnam were the biggest proponents of the Gulf, Iraq, and Afghanistan wars...funny that.)

Let me ask you: you are a teacher, and the grades you give out will determine whether a young man can stay in the US or has to join the legions of troops being killed in a war justified by a 'weapons of mass destruction' type rationale. The sad thing is that the current wars are not front and center in the minds of Americans; they are being fought by a sliver of a minority of Americans. We might put a magnetic bumper sticker on the car or post some bullshit picture on facebook, but the fact is the vast majority of us do somewhere between jack and shit to support the troops. Rewind a few decades: there is a draft, and many men are going to college in order to avoid it. As an instructor in said college at said time, how comfortable are you giving a fine young man a C, D, or F that may very well ship him off to war if he isn't well connected? I have to admit, I would almost certainly have inflated my grades to protect these young men from war. You can afford to be an ideologue regarding grades in the abstract, but the point is ~60,000 US troops were killed and 150,000 wounded (FYI, the Iraq war amounted to 4,500 deaths and 36,000 wounded). As an instructor in the deferment years, you had to own these issues.


A's FTW, from here.
Now, that being said, why didn't the GPA drop after the Vietnam War? More importantly, why did it start increasing again in 1985 and continue through today? Remember, in private schools in 2007, almost 50% of all grades were As (compared with 15% in 1960, when the pool of applicants was much smaller and arguably more select)! Look at the left figure, which is analogous to the one above: you can see that the grades of B and F are relatively stable, but Cs and Ds have nosedived and As have soared. The problem is as bad as possible; the supposedly hardest grade to earn, the A, is the most popular grade.

Looking at my school specifically (again, this is a national problem, not an institutional one), the trend is similar.
From here, grades for the Fall semester of 2011
This represents the # of grades given out in a single semester, Fall of 2011, by year (1000 = freshman; 2000 = sophomore; 3000 = junior; 4000 = senior; 5000 = graduate). My institution, the bottom row, shows that 38-46% of all grades are As. This is in keeping with national averages. But what does this mean?

My university policy states (see figure right):
from here

So an A is 'outstanding,' a B 'significantly above' requirements, a C meets requirements, a D fails to meet requirements but is worthy of credit, and an F is not even listed. We can assume an F does not meet requirements and is not worthy of credit.

Think about this: 38-46% of all students in all classes are OUTSTANDING! Presumably another 25-35% are significantly above requirements. That means 63-81% of all students in all courses are significantly above requirements or outstanding! Maybe, just maybe, our bar (and the bar at all schools) is too low.

Why does this matter? Isn't it a good thing that students earn such high grades?

In response, let me ask: do you think it was worthwhile to differentiate the A students from the C students from the F students in 1960? I do. It's not that Cs are poor; a C represents that the student met course requirements. If the course requirements allow most if not all students to receive an A, then maybe the course should have higher requirements.

The problem of grade inflation is important. First, students who really excel in a course should get the recognition associated with that competency. When 25 of 35 students receive an A, there is no way to differentiate the majority of the students. Are all 25 students really outstanding? If you increased the requirements, would all 25 continue to excel, or would some show less competency? I expect the latter; then maybe you could identify the truly outstanding students.

Second, what about intercollegiate competition? For example, see below. 
From here.
This represents the grade breakdown of some of the different colleges at the UMNTC. CBS, the College of Biological Sciences, and CFANS, the College of Food, Agriculture, and Natural Resource Sciences, show striking differences in the % of As earned, particularly at the undergraduate levels (1000-4000). This means that a student graduating from CFANS with a GPA of 3.25 could be considered a stronger candidate for a job than a CBS student with a GPA of 3.15. However, it is clearly easier to 'earn' an A in CFANS than in CBS, which is not a factor the job interviewer is aware of. (There's also the confounding fact that the average ACT score of incoming CBS students is higher than that of CFANS students, arguably suggesting that CBS students are generally stronger than CFANS students.)

Maybe CBS should lower its standards to be more competitive with CFANS. This dynamic probably helped drive the overall spike in grade inflation nationally. The idea that our students are disadvantaged compared to the other university, or that our students are as good as private school students, or that our students are better than public school students, just keeps driving grades up and up. The problem is that the increase in earned grades comes with a decrease in the information conveyed by those grades. Which of the several thousand 3.5+ GPA students is truly remarkable in a particular field? Does the job interviewer distinguish between these candidates, or simply identify those candidates who interview well? Of course, this focus on job readiness concerns me for different reasons, which will be the focus of a future post(s).

Just to bring this full circle and back to teaching at my institution, which I'm sure is similar elsewhere: the UMN policy for a credit hour is as follows:
Student workload expectations per undergraduate credit. For fall or spring semester, one credit represents, for the average University undergraduate student, three hours of academic work per week (including lectures, laboratories, recitations, discussion groups, field work, study, and so on), averaged over the semester, in order to complete the work of the course to achieve an average grade. One credit equals 42 to 45 hours of work over the course of the semester (1 credit x 3 hours of work per week x 14 or 15 weeks in a semester equals 42 to 45 hours of academic work). Thus, enrollment for 15 credits in a semester represents approximately 45 hours of work per week, on average, over the course of the semester.
So for a standard 3-credit course, meeting three times a week for 50 minutes, the average student is expected to work 9 hours per week. This is the average college student, not the average human being; you do not get to average in uneducated, impoverished people of the same age. Also, read that statement carefully. It is not simply the average amount of work for the average college student; it is the average amount of work for that student to receive an average grade, in other words a C. 9 hours per week to earn a C in a three-credit course. 45 hours a week, on a full 15-credit load, to earn Cs across the board, if you are an average student. Some students, almost half in fact, will be below average.
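The arithmetic in the policy quoted above is simple enough to sketch in a few lines of Python. The 3-hours-per-credit-per-week figure comes from the policy itself; using 15 weeks (one of the policy's two stated options) is my simplifying choice:

```python
# Workload implied by the UMN credit-hour policy: one credit represents
# about 3 hours of academic work per week, for the average student,
# to earn an average grade (a C).

HOURS_PER_CREDIT_PER_WEEK = 3
WEEKS_PER_SEMESTER = 15  # policy says 14 to 15; using 15 here

def weekly_hours(credits):
    """Expected hours of academic work per week for a given credit load."""
    return credits * HOURS_PER_CREDIT_PER_WEEK

def semester_hours(credits, weeks=WEEKS_PER_SEMESTER):
    """Total hours of academic work over the whole semester."""
    return weekly_hours(credits) * weeks

print(weekly_hours(3))    # a standard 3-credit course: 9 hours/week
print(semester_hours(1))  # one credit over the semester: 45 hours
print(weekly_hours(15))   # a full 15-credit load: 45 hours/week
```

Note that all of these numbers describe the work expected for an average grade, which is the point of the paragraph above.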

The problem is that the solution is hard. I cannot solve it in my courses. If I give a bell curve distribution, even if it looks like the 1960s (it does), then the most I will accomplish is to drive students out of my courses and into my colleagues. This could negatively impact my yearly evaluations (tenure is a good thing). My university cannot solve this problem. If UMN designates a more rigorous grade distribution, the big ten schools (of which there are 14 at last count) could recruit students at a huge advantage over UMN, which would effect enrollment and tuition dollar revenue. Really this problem needs to be addressed at a national level by the colleges and universities themselves. If it is not, it will not be long before state and federal officials look at the numbers I showed above and started questioning the value of a college education. Indeed this is already happening although the focus is not on rigor. This also feeds into the pervasive idea that a college education is a job training education, it's not although it can be (another forthcoming post(s)). If all these A's we are giving out are not helping students land awesome jobs, then why are states contributing to the funding of these colleges?

Best Professors List

Yesterday, I discussed a movement to make parts of the course evaluations done by students available to prospective students, basically an in-house RateMyProfessor-like service. During that discussion, I noted that there was an additional movement to establish a 'top 30% teachers' list for students to use in choosing their courses. I am against a 'top 30% teachers' list and want to defend that position here.

The idea for a 'top 30%' list seems to come, at least in part, from the Vice Provost for Faculty and Academic Affairs. The list is based on a University of Illinois List of Teachers Rated Excellent; the Vice Provost was a faculty member there (information from the Minnesota Daily). As an aside, one thing I see much too frequently from the University of Minnesota administration is the 'well, everyone else is/was doing it' defense. 'Everyone else does it' does not constitute a data-driven justification.

Of course, having a 'top 30%' list does provide more information to prospective students trying to plan which courses they will register for (FYI: 'for which courses they will register' sounds stupid, so I'm ending that sentence with a preposition; deal with it). But this raises a number of questions that need to be dealt with, including: Is the information meaningful in the way the university intends, or at least in the way I hope they intend? How do we determine the 'top 30%' of teachers? What are the potential ramifications of such a list?

Is the information meaningful in a way the university intends?

This is a key problem, in my opinion. I expect the university wants to identify the individuals who best educate the students, generate passion for the material, and produce students well prepared for their subsequent courses or life outside of college. Does anyone believe that students can effectively evaluate whether their instructors have effectively disseminated knowledge or taught the tools necessary to gain and assess knowledge? Do the students know whether or not they are well prepared for their advanced courses? I doubt it.

In fact, numerous studies have demonstrated that course evaluations largely reflect the grade the student predicts they are getting and little else. Based on this, courses where most students get As are more likely to have instructors rated as excellent than courses that show a greater distribution of grades. This is not an absolute, of course.

Smaller classes are also reviewed more favorably than larger classes, which gives an advantage to instructors teaching upper-level courses compared to instructors teaching large survey courses.

How do we determine the 'top 30%' of teachers?

Furthermore, why 30%? Why not 25% or 33%? Regardless, how do we establish who the top 30% are? I have my evaluations for the two courses I taught last semester. One was a course for second-year students, which I taught for the first time; the other was a course for fourth-year and graduate students that I developed and have taught for six years.

The surveys list the following 'Core Items,' which are scored from 1 (Strongly Disagree) to 6 (Strongly Agree). I'll give the mean and standard deviation for my two courses for each Core Item.

  1. The instructor was well prepared for class.
    • 5.85 ± 0.36 and 5.94 ± 0.23
  2. The instructor presented the subject matter clearly.
    • 5.89 ± 0.31 and 5.56 ± 0.60
  3. The instructor provided feedback intended to improve my performance.
    • 5.19 ± 0.77 and 5.17 ± 0.76
  4. The instructor treated me with respect.
    • 5.74 ± 0.44 and 5.41 ± 0.84
  5. I have a deeper understanding of the subject matter as a result of this course.
    • 5.67 ± 0.54 and 5.61 ± 0.83
  6. My interest in the subject matter was stimulated by this course.
    • 5.07 ± 0.98 and 5.17 ± 1.38
To make this more meaningful, I'll convert those means to percentages (mean ÷ 6 × 100) and an A-F grade scale.
  1. 97.5 (A) and 99.0 (A)
  2. 98.2 (A) and 92.7 (A)
  3. 86.5 (B) and 86.2 (B)
  4. 95.7 (A) and 90.2 (A)
  5. 94.5 (A) and 93.5 (A)
  6. 84.5 (B) and 86.2 (B)

In both courses my overall 'GPA' (all six letter grades averaged on a 4-point scale) is a 3.67: four As and two Bs for each course, which seems pretty damn good. Does this put me in the top 30%? It seems likely, but does this mean I am an excellent teacher, or that I simply have a great rapport with the students, or maybe both? What if these scores are not sufficient to be in the top 30%? Does this mean I am not an excellent teacher? Should I make changes to my courses to improve these scores? Note: I am not suggesting making changes to improve the course, but simply to improve these scores.
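For transparency, the conversion above can be sketched in Python: percent = mean ÷ 6 × 100, then a conventional 90/80/70/60 letter-grade cutoff and a standard 4-point GPA. The cutoffs and point values are my own assumption of a typical scale, not anything in university policy:

```python
# Convert 1-6 Likert-scale means to percentages, letter grades, and a GPA.
# The 90/80/70/60 cutoffs and 4-point scale are illustrative assumptions.

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def to_percent(mean, scale_max=6):
    """Express a Likert mean as a percentage of the maximum score."""
    return round(mean / scale_max * 100, 1)

def to_letter(percent):
    """Map a percentage to a letter grade on a conventional scale."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

def gpa(means):
    """Average the letter grades for a list of Core Item means."""
    letters = [to_letter(to_percent(m)) for m in means]
    return round(sum(GRADE_POINTS[x] for x in letters) / len(letters), 2)

# Core Item means for the first course listed above
course_1 = [5.85, 5.89, 5.19, 5.74, 5.67, 5.07]
print([to_percent(m) for m in course_1])  # [97.5, 98.2, 86.5, 95.7, 94.5, 84.5]
print(gpa(course_1))                      # 3.67
```

Running the second course's means through the same pipeline gives the same 3.67, which is why both courses land on identical 'GPAs' despite different item scores.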

I am still guessing at how the top 30% would be determined; maybe the powers that be will simply choose item #6, or use none of these Core Items and instead one or more of the 'student release data' items discussed yesterday. This was never clearly established.

What are the potential ramifications of such a list?

As I already noted, this kind of scoring system may cause some instructors to change their teaching and/or courses simply to improve their score. Why would they do this? Well, not all instructors are tenured or even on the tenure track. These kinds of scoring systems are easy for administrators to look at and say, 'Ohhh, a number with a decimal point, that must be important,' regardless of whether or not the number conveys meaningful information. It is also important to realize that the number of students in a classroom directly reflects the money coming into the department offering the course. If there are two sections of the same course taught by different instructors, and one is packed while the other is unable to fill its seats because of these scoring systems, which instructor will likely suffer most during yearly evaluations with their departmental chair/dean?

Will there be competition between colleges? If the 30% is a university-wide figure, then problems can emerge. There are already some serious issues with grade inflation here in Minnesota, as observed at many universities across the country. However, the grade inflation is not uniform. The College of Biological Sciences and the College of Science and Engineering have much less grade inflation than the College of Food, Agriculture, and Natural Resource Sciences (a post for another day). Given the evidence that instructor evaluations correlate with student grades, colleges with more grade inflation should, on average, have better course evaluations. How does this tell us which instructors are in the 'top 30%'?


As it stands, I do not see the value added by a 'top 30%' list any more than by the release of some course evaluation data. The point seems to be one of treating the students as customers. I understand that we as faculty and staff have responsibilities and duties to our students and the community as a whole. I am not espousing a view where students are fortunate to hear me pontificate on microbiology. But universities are not corporations like Burger King, trying their best to please and appeal to the customer more than McDonald's (aka the University of Wisconsin).