Project Sisyphus

Responses From Others

  1. With regard to Question III of the Sisyphus project, I have to say that I am not comfortable with national standards. In about a half-hour, I will leave to give a lecture on soils to an Environmental Geology course. Every time I teach the course, I give this lecture in a slightly different way, and I feel that this variation is good for me and for the students, because I do not think that my job is to deliver facts. I want students to recognize a relationship between what I am saying in class and what they have observed in their lives - some of them actually do observe things, so at least those students have mental contexts into which they can fit the lecture material.

    How do national standards play a part in this approach to education, an approach which I am sure is practiced by most experienced teachers?

    With regard to documenting improvement, I can suggest one approach. I am experimenting with in-class exercises in my course. Lecture stops and the students work with hand-outs. They work in small groups, in a class of 60 students, in a lecture hall that is not suited to this approach. But it seems to work. How do I determine the efficacy of this approach? Last semester, when I started doing this, I gave them the same multiple-choice final examination I have given for the last five semesters. Some of the questions change each semester, but the kinds of questions do not. The mean grade did not change last semester (the mean is a very stable statistic), but the variance was considerably lower than in the past (with one exception, which I can explain). So, until I get more data, I am hypothesizing that having students work together brings them closer together "knowledge-wise" than does the straight, traditional lecture approach.

    This is an internal method of assessment, independent of what people do in other places, and, I think, is more meaningful than a comparison with scores on a national standard, because it tests the efficacy of a method, not the number of facts students have memorized for a test.
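
    A minimal sketch, in Python, of the kind of mean/variance comparison described above, assuming one has the raw final-exam scores for two semesters (all scores below are hypothetical):

        import numpy as np
        from scipy import stats

        # Hypothetical final-exam scores for the same course in two semesters.
        lecture_only = np.array([72, 65, 88, 54, 91, 77, 60, 83, 70, 66])
        with_groups  = np.array([74, 70, 79, 68, 81, 75, 72, 77, 73, 69])

        # The report above: the mean stayed put while the variance dropped.
        print("means:    ", lecture_only.mean(), with_groups.mean())
        v1, v2 = lecture_only.var(ddof=1), with_groups.var(ddof=1)
        print("variances:", v1, v2)

        # One conventional check that the drop in spread is more than noise:
        # an F-ratio of the two sample variances. (Levene's test is a more
        # robust alternative if the scores are far from normal.)
        F = v1 / v2
        p = stats.f.sf(F, len(lecture_only) - 1, len(with_groups) - 1)
        print(f"F = {F:.2f}, one-sided p = {p:.3f}")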

    Pat de Caprariis
    Dept. of Geology
    Indiana-Purdue Univ
    pdecaprr@iupui.edu

  2. I agree that such a test would be useful. The physics test changed my life as a teacher. I have tried to write some appropriate questions over the last few years, with great difficulty.

    The important thing is that the Force Concept Inventory and the Mechanics Baseline Test are both conceptual tests, not tests of factual content. As concept tests, their questions are very carefully worded, and the answer choices have been developed in response to specific misconceptions documented by physics education researchers. These tests look nothing like the typical questions found in intro physics books and test banks. They require no calculations, no knowledge of formulae, and almost no technical vocabulary.

    In geology, if one were to use the sorts of factual recall questions found in typical intro geology test banks, one would get results that would speak only to factual retention, not to understanding. The assessment must match the goals. If we wish to test the development of conceptual understanding, then we need to carefully identify the core concepts in geology, identify the student misconceptions that block understanding, and then develop test questions to address those concepts. Geologists seem to lack the data that physicists have on student misconceptions, and that is what has stymied me so far.

    Dave Smith
    La Salle University

  3. Just two comments regarding this thread on standardizing material and exams for intro Geo classes.

    First: comparing physics and geology is like comparing birds and dinos - they are both science, but that is where the similarity ends. Physics, like chemistry and especially math, does not have "out of the classroom" visualizations for students. As such, unlike Geology, those subjects are the same from New York to Houston to California. This leads me to,

    Second: I do not know how everyone else teaches intro Geology classes, which have about 95% non-majors and maybe a few majors (and hopefully a few future ones), but I, at the risk of skipping a topic or two, structure my classes around what students see every day. I know I cover neotectonics and Cenozoic clastic sedimentation in much greater detail than, say, point bars, deltas, Paleozoic tectonics, etc., because that is what we have here - Tippecanoe transgressive sands are not going to be an integral part of my students' lives, but earthquakes and the Sierra Nevada are. I think a list of concepts which we think all students should know would be very useful, but not standardized exams.

    Jon Sloan
    Dept. of Geology
    CSUN
    at or near Northridge (pending the next quake) 91330

  4. I'd like to second Jon Sloan's comments about the difficulties of standardized tests in freshman geology courses. We can't reach agreement on the topics and content to be covered among the five instructors teaching Physical Geology within our own department, even when shared labs are the driving force for such an agreement. The fact is, we too have 95% non-science majors, and the content of our course is customized not only by geography (as well explained by Jon), but also by individual interest and expertise. In my opinion, this type of course, directed at non-science majors, should NOT have a standardized content if our goal is to make our science interesting and understandable to the typical student body.

    It is true that the textbooks have relatively standardized content; however, I have never met an instructor who felt the entire text could be taught in a single semester. Given the necessity of culling material, and the merits of customizing that material based on geography, interest, expertise, etc., standardized exams would only serve to make the course content more restricted, less interesting, and less relevant to the majority of those taking the course!

    Lastly, I would also emphasize that in all cases, restricting content must be done without sacrificing standards, which is a separate issue entirely.

    Bill

    William R. Dupré
    Associate Professor
    Department of Geosciences
    University of Houston

  5. I agree completely. It matters less what you teach than how you teach.

    Warren

  6. I'm starting to worry about the baby in the bathwater! When these Q3 comments started rolling in, I generally agreed, as I am not keen on national standardised tests on philosophical grounds - it is too easy a step, I fear, from these to having an even more explicit form of contractual teaching imposed on Universities than has come to be the case. Having said that, benchmarking should generally be undertaken to lead to the exchange of beneficial practices (I wish I could remember the citation), although my University has been using it as a league-ladder tool. Specifically, I think Q3 was raised for this purpose.

    How will we know whether there are more or less beneficial practices in teaching mass-market earth science unless we do some reasonably controlled testing? So, rather than retreating from the idea by citing the local differences between subjects - differences which are definitely valid - shouldn't we instead look at the concepts which are likely to be seen as shared goals in teaching such subjects, whether they be in Maine, Modesto, or even Melbourne?

    A practical proposal might be to (a) find out how many folk use some of the question sets made available with introductory texts, then (b) find out which questions from those sets are actually in common use, and (c) think about whether the correlation of responses to the common questions might throw some light on teaching practices. (I haven't used such question sets, but I'm interested in starting.)
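
    For step (c), a minimal sketch of what that correlation check might look like, assuming each class reports the fraction of students answering each shared question correctly (all numbers below are hypothetical):

        import numpy as np

        # Rows are classes, columns are the commonly used questions; entries
        # are the fraction of each class answering that question correctly.
        correct_rates = np.array([
            [0.82, 0.45, 0.63, 0.71, 0.30],   # class 1
            [0.78, 0.52, 0.60, 0.69, 0.35],   # class 2
            [0.90, 0.40, 0.75, 0.55, 0.28],   # class 3
        ])

        # Pairwise correlation of the classes' question profiles. High values
        # suggest the same questions are hard everywhere; low values hint at
        # differences in emphasis or practice worth a closer look.
        print(np.corrcoef(correct_rates).round(2))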

    (I teach beginner Geology for Engineering students to classes of 30-100, and it isn't much like our beginner Earth Science classes for Science students. I also teach Geophysics to classes of 30-40 at senior undergraduate level. I'm always surprised at what they learn, though.)

    Lindsay Thomas
    School of Earth Sciences, The University of Melbourne
    Victoria 3010, Australia

  7. I cannot disagree with any of the points Lindsay is making. Surely we can learn from one another, and all benefit from the resulting improvement in our teaching. Perhaps what concerns me a little about standardized testing is that it risks locking into place what is ordinarily a dynamic process. We could all agree, for example, to put our best efforts over the next year or so into developing some sort of standards or essentials that would serve as a baseline for evaluation of geology/earth science undergraduate courses. Having reached that point, however, I wonder whether we would still retain the flexibility in curriculum development that we need in order to respond to changes in career options, changes in student interests, and changes in the academic establishment. Perhaps so, but I would not want to forfeit the right to, for example, give up teaching standalone lecture-lab courses in mineralogy, petrology, paleontology, or whatever, in favor of, say, an earth materials course or a global change course which might be more project-based, with an emphasis on data acquisition and manipulation, report writing, oral presentation of results, and so forth.

    In other words, I think there is already a lot of creative thinking going on in redesigning undergraduate earth science curricula, and more should be encouraged. I would not wish to stifle this kind of initiative by imposing some set of standards that might be unintentionally restrictive. I see this in engineering and other disciplines, and I am not sure we need it.

    Warren Huff

  8. I know the programme has rolled on, but over the past weeks I count myself fortunate to have read an interesting discourse on the relative merits and demerits of the humble textbook. I will now de-lurk.

    From an industry point of view, we still have good use for the textbook, large numbers of which are written in house for specific pieces of equipment and techniques of use. (We also deliver to the field with all the texts in .pdf format for ease of transport.) We still need people who will pick a text up, read it through, and ask questions about it - even if those questions amount to "Must I really do it this way? Or can I not do it by a safer/cheaper/more efficient method?" So there is a direct plea to all educationalists of the geosciences: keep on bringing out inquisitive people. As a trainer in industry, the worst-case scenario that I am faced with is people who just sit and absorb, requiring to be spoon-fed absolutely everything. You probably feel much the same!

    Enough digression from the topic:

    Where we expect personnel to be on the move frequently, such as in the oil industry, a text has an advantage. Texts do not need battery life in the way a laptop does. A text is never going to get shut down by airlines insisting that electrical energy may affect the navigation systems of the aircraft! If you are struggling to sell printed works to students, perhaps casually informing them that print is read 25% faster than the electronic word may bring a bit of focus back. The downside is that a text has to be well written to be stimulating, and that is also linked to personal "tastes" and the qualities of the author. A lot can be attached to the electronic work in terms of video/sound demonstrations of processes, which makes it a very competitive medium. The only way a text could compete is to go to the live action in the field!

    Perhaps some of you would be surprised at the lack, or low quality, of intranets within industry. Maybe you wish to warn your students that what they have now in front of them may be the best they get until they have had some years in industry and industry has caught up! I am employed by a large multi-national service company. We have frequently come across problems where large oil companies have outsourced their network management to a third party. This is done on a strict budget, and perhaps the oil company's management do not fully comprehend the system they have outsourced, so now they have one of limited use. At least two of the major oil companies cannot run anything more than IE3! That becomes a problem where companies are trying to feed their geologists "live" drilling data requiring a minimum network browser standard of IE4. It is some indication of the system clashes that people have to learn to cope with outside education. I feel that will change again in the future, as the vogue for outsourcing gets reviewed on a case-by-case basis.

    In my own case I have chosen text-based materials over e-publishing. I decided that for a basic-level Electric Log Interpretation course for geologists, I would use Asquith's book from the AAPG. The benefits were that I did not have to compete for computers in training rooms or guarantee that everyone had access to a PC to get at the material when they were away from the classroom. In the cost-benefit analysis, I could not justify making my own course unless I would use at least 1000 copies - which approximates to more than five years' supply! The book I chose is now over 14 years old, a standard work known by many within the industry, and it still delivers. Who is using electronic material today that is 14 years old and still delivers the required message?
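
    A back-of-envelope version of that cost-benefit sum (the per-copy price and in-house development cost below are assumed placeholders; only the 1000-copy threshold and the more-than-five-years supply figure come from the case above):

        # Hypothetical per-copy price and one-off authoring cost; real figures
        # would come from the vendor and an internal costing exercise.
        book_price = 30.0
        development_cost = 30000.0

        breakeven_copies = development_cost / book_price
        print(f"break-even at {breakeven_copies:.0f} copies")

        # 1000 copies is "more than 5 years' supply", i.e. under 200 copies/year.
        copies_per_year = 1000 / 5
        years = breakeven_copies / copies_per_year
        print(f"at ~{copies_per_year:.0f} copies/year: {years:.1f} years to break even")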

    In another case, I am looking to use in-house publishing to rewrite a self-teach drilling and engineering manual for logging geologists. There is little point in my writing the whole text when I can use the current Directional Drillers, Surveyors, Drilling Fluids and Wellbore Construction Engineers manuals, which are already in place and in .pdf format, HTML-link to the relevant chapters/pages, and run it all off a CD-ROM. (Although I am still faced with a problem of access to PCs.)

    I hope that texts and e-publishing will continue to co-exist, each has its strengths which can be played to. It is a case of knowing your subject matter, audience and presenting accordingly.

    Thanks for the interesting comments over the past weeks. For the person who noted that they still kept their college texts: I still have mine also, and they are still in use 20 years on!

    Regards

    Stuart Pressage

  9. I have been waiting for Don Mackenzie (resident lurker in Derby, UK) to comment about assessment, but ... TRIADS is what he might talk about. It might be worth a look. Perhaps Roger Suthern can add to this effort. I can see how assessment could lead to standardized or normalized tests, and perhaps we, as instructors, might be evaluated based on how our students do; it seems as though someone has proposed to evaluate school systems/districts in this fashion.

    Is the idea proposed from Melbourne worth pursuing? Take a look at the link above. Are there questions that can be asked that allow testing of mastery of a concept - questions such that response (a) suggests one set of thought processes and response (b) another?

    Of course, we might be forced into a discussion about what students should take away from an introductory geoscience course ... or what they bring in ... do we add value?

    I agree with Lindsay's comment about "shared goals", and I too am interested in taking this another step.

    John Butler

  10. Clearly few of us would relish teaching material to meet a set of standards imposed by others. But as Lindsay pointed out, that isn't really the point. If we want to improve student learning (not necessarily the same thing as improving teaching) we need to do two things:

    1. identify good practices (as discussed by others, Warren, Pat, ...);

    2. encourage others to adopt them.

    The second step may be more difficult than the first. There is a steadily increasing body of literature that discusses how to improve learning. Like Pat, I have added in-class group exercises to my courses, and they have changed the whole classroom experience for the better (and I have the anonymous surveys to prove it). I have also instituted short (2-4 question) reading quizzes every day to allow me to get away from regurgitating the text and to allow for more discussion in lecture (approx. 70 students). Miraculously, several students responded on the survey that they liked having a daily quiz because it provided incentive for them to study (only 2 complained). So class now typically involves quiz/brief lecture/group exercise, with other components when necessary (student observations of images; group quizzes; minute essays). Initial results reveal higher average scores on the same questions on this semester's exams vs. last semester's, when class was more "traditional". However, results are far from conclusive, and it is not clear that a multiple-choice test is really assessing improvements in learning.
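
    A minimal sketch, with hypothetical scores, of one conventional way to check whether higher averages on identical questions are more than noise: Welch's two-sample t-test.

        import numpy as np
        from scipy import stats

        # Hypothetical scores on the same exam questions in two semesters.
        traditional = np.array([61, 70, 55, 68, 74, 58, 66, 72, 63, 69])
        interactive = np.array([68, 75, 64, 71, 80, 66, 73, 77, 70, 74])

        # Welch's t-test (does not assume equal variances between semesters).
        t, p = stats.ttest_ind(interactive, traditional, equal_var=False)
        print(f"mean gain = {interactive.mean() - traditional.mean():.1f} points, "
              f"t = {t:.2f}, p = {p:.3f}")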

    I'm convinced that this is a better method than the "sage on the stage" model. But what will it take to convince my colleagues of this? Improved test scores on our own exams can be explained away by a variety of factors (e.g. easier exams, less content, "easier" topics, "hints" in lecture) and the rest of the evidence is pretty anecdotal. In discussions about various teaching methods, one of my colleagues labeled reading quizzes and in-class discussions "tricks". For some, the only way they will be convinced of the utility of alternate teaching methods is to have an unbiased outside evaluation method. The most familiar method is a standard test or group of questions. Are there alternatives? Is there another mechanism that could be used to evaluate improvements in learning?

    David McConnell

  11. I'm involved in an assessment project in the UK. It is based around the TRIADS system produced by Don Mackenzie at Derby University, and provides full multimedia computer-based testing. We have over 20 UK Higher Education partners and a similar number of subject areas. What are my comments on this question?

    1. I believe our product is an earth scientist. We like to believe that this beast has at least a basic knowledge of his/her subject, the skills to find out more, and the understanding to analyse, synthesise and present.

    2. How do we quantify the quality of our product? How do we check our quality from year to year? How do we check our quality against somebody else's?

    The key things here are learning, under 1 above, and assessment, under 2. Teaching may help in this process, but it is now well known that different students learn differently, so what is "good" teaching for one student may be "less good" teaching for another.

    Assessment has to be keyed in to course aims and objectives by overtly addressing intended learning outcomes. Some survey work that I did demonstrates something I think we already know: there is much commonality in earth science programmes of study. For example, a sedimentology module will deal with the various classification schemes for sediments, with the various methods for recording and interpreting data (logs etc.), and with the processes and environments that can be deduced from these. Of course, the sample materials studied will differ, but the underlying generic core of knowledge, skills development and understanding is common. It represents a benchmark.

    Assessments that address such benchmarks can be used to test whether students have attained that level. I believe that computer-based assessments have great potential in this area because they are objective and can be set up to address actual learning outcomes, or benchmark knowledge, skills, understanding, etc. Other forms of assessment - essays, reports, dissertations - are still needed to test the range of skills and understanding not easily covered by computer, but such modes of assessment do suffer from the problems of objectivity associated with human marking.

    I realise that talking of benchmarks brings up the question of "national standards", or even "international standards", which I am uneasy with myself (would I agree with the "agreed" benchmarks?). However, the current presence of anomalous outcomes from year to year, within an institution or between institutions, must also make us feel uneasy about not having "national standards".

    In my own part of the UK, three comparable institutions in terms of size and quality of student intake (in all subject areas) have markedly different graduate output quality if final degree results are the measure. Degrees are classified in the UK, and the key measure is the proportion of first and upper-second class degrees (a mark average of 60% or greater) against lower seconds, thirds and fails (less than 60%). Three institutions within 100 km of each other have intake quality measured (in 1998) at 180/250, 193/250 and 184/250, but percentages of graduating students scoring 60% or better of 45%, 58% and 64% respectively. Now, of course, it may be that the teaching at the third institution is absolutely fantastic and leads to greater value added when it comes to measuring the learning attained. However, it may also mean that there are different standards operating. We do not have the data to test which hypothesis is true. This ought to make us feel uncomfortable as educators.

    Any test will involve some form of "national standard". My suspicion is that such things will happen: the UK government has already had benchmarking pilot studies in some subject areas, with the implication that these will take place in all subject areas. It is probably better to help frame benchmarking than it is to ignore it and then complain when you don't like the outcome.
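
    For reference, the three institutions' figures laid out side by side (the A/B/C labels are mine, since the institutions are not named), with a crude output-to-intake ratio purely for illustration:

        # Intake quality (1998, out of 250) and share of graduates at 60%+,
        # taken from the figures quoted above; A/B/C labels are arbitrary.
        intake = {"A": 180, "B": 193, "C": 184}
        output_pct = {"A": 45, "B": 58, "C": 64}

        for inst in intake:
            intake_pct = intake[inst] / 250 * 100
            ratio = output_pct[inst] / intake_pct
            print(f"{inst}: intake {intake[inst]}/250 ({intake_pct:.0f}%), "
                  f"60%+ degrees = {output_pct[inst]}%, ratio = {ratio:.2f}")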

    To sum up, we should be wary of imposed benchmarks, but we should not be so smug as to believe that our current practices are really that good. Perhaps some attempts at designing some sets of benchmarking questions and trying them out at different institutions might be a useful exercise. TRIADS would be an excellent medium for the tests.

    Alan Boyle

    University of Liverpool
