
Small-Scale Question Sunday for March 5, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I'm grading some district reporting requirement art tests. Each art teacher has an identical test, and grades about 300 of them a year. The current iteration is marker, crayon, pencil, and paper, where we grade them by hand, then manually enter the number for each question into a database. It is horribly tedious, because grading them requires judgement about ambiguous work (is that doodle textural? It's not very good texture, but they aren't very good at drawing...), but we aren't actually learning anything very useful from them, and they aren't actually aligned with the national standards. The other teachers and district office are open to a different approach, as long as it produces a numerical score and is less useless than the current iteration. The kids have Chromebooks they could bring, if needed.

Ideally, we would be assessing the national art standards (students can come up with an idea, produce something using that idea, connect it to some existing art, and articulate why they made the choices they did), but we haven't been able to figure out a way to assess that in half an hour or so and grade it in about two minutes, so currently we're assessing elements of art and a few common concepts (line, color, texture, symmetry, variety, geometric vs organic shapes).

Is there a way to design an automated test for the future, one which isn't primarily a reading comprehension test?

I think you could do good gradeable art tests using human proportions and perspective work, both of which can be made to have "right answers". Possibly using graph paper if the student needs to turn in a drawing. Then have quick ways of counting tiles used for proportions etc. And as an artist I would feel way more comfortable grading that than various stylistic choices.
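A minimal sketch of what the tile-counting could look like, assuming a made-up target ratio and tolerance (the 7.5-heads figure convention is just an example, not an agreed rubric):

```python
# Hypothetical helper: score a graph-paper proportion exercise by tile counts.
# target_ratio and tolerance are invented example values, not a real standard.

def proportion_score(head_tiles, body_tiles, target_ratio=7.5, tolerance=1.5):
    """Score 0-100 by how close the body/head tile ratio is to the target."""
    if head_tiles == 0:
        return 0
    error = abs(body_tiles / head_tiles - target_ratio)
    if error >= tolerance:
        return 0
    return round(100 * (1 - error / tolerance))
```

So a figure whose body spans 15 tiles over a 2-tile head scores 100, and anything more than 1.5 ratio-units off scores 0.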

Thanks!

I think that might be too advanced for these kids -- I teach perspective in 4th grade, and accurate portraits in 5th grade, and have a fairly high non-completion rate in both cases, so wouldn't be willing to go much younger.

The test in question is for second graders, so many can't write coherently either, which is why I mentioned not wanting a reading test.

My actual priorities for lower elementary are something like:

  • Write their name legibly, on the front, right-side up

  • Some ability to use the materials (will paint the paper, not something else; won't destroy the tip of the brush; mostly won't go over the same spot until there's a hole; will spread out the watercolors with water rather than using them all up dry and complaining about not having any left; that kind of thing)

  • Roughly follow the instructions. If we are painting a large monochromatic blue cat, they will not paint little stick figures of their family on a white background, and complain about not having the colors they want.

  • Keep with it until it's done, even if it's not what they want. Don't crumple it up and throw it in the trash, then sneak up to get another paper or throw a tantrum and cry or something. Don't cross out the drawing they didn't like in permanent marker as though it were a word in a written draft; erase it or else work with what's there.

The current test is about labeling various things with elements of art words, and following instructions to, for instance, color one part with primary colors, another with cool colors, another with warm, and so on. The vocabulary covered is basically alright (though unrelated to the standards). It turns out many second graders aren't able to label, and a large part of the test was spent on teaching them to label, and about half still couldn't, didn't want to, or were sad that their art time, which they only get twice a month or so, was spent on trying to label things. It is also confusing, because it involves doing a set of 12 different tasks (label, color, draw, "add texture") all on the same pre-outlined paper. Many are distracted by trying to make some part or other pretty. One ended up with five Sonic the Hedgehog people on it. I think this is developmental.

Every year we, the art teachers, talk for several hours about whether we have to do this (yes, we do), and, if we have to do this, what might be the easiest way to do it that could be useful, or at least harmless. Every year we don't know, and go back to this test. Given that it's not testing anything we or the kids much care about, we would prefer that it be something self-grading, which doesn't require writing or typing, or perhaps even reading skills. Like it would say, out loud, to the kid "color the butterfly with the three primary colors" and the kid would drag red, yellow, and blue onto the butterfly, and then it would score that.
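The scoring side of that drag-and-drop idea is simple enough to sketch. This is just the answer-checking logic, assuming the app reports which color swatches ended up on the butterfly; the names are made up:

```python
# Toy scoring for a spoken "color the butterfly with the three primary colors"
# task: one point per correct primary color dropped on the shape, minus one
# per wrong color, never below zero.

PRIMARY = {"red", "yellow", "blue"}

def score_primary_colors(dragged):
    dragged = set(dragged)
    correct = len(dragged & PRIMARY)
    wrong = len(dragged - PRIMARY)
    return max(correct - wrong, 0)
```

A kid who drags red, yellow, and blue scores 3; red plus green scores 0. The hard part is the app and audio around it, not the grading.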

What I think we might go to is a dozen little boxes, with a task written over each box, which might at least mitigate the confusion of having to do all the tasks on top of the same picture.

Ah yeah that almost seems like a developmental psychology problem of some kind at that age. I can understand wanting to have a standard just to give kids direction or expectations but that's out of my realm of expertise at that point. Good luck!

Man, I would have been furious if people wasted my art time on that stuff--I'm suspicious of whoever came up with whatever theory that lesson came from.

4 year old me was going to colour his robot however the fuck he wanted, and if you wanted labels you'd better not take issue with how he spelled "disintegrator"

Do you have (lossless) digital copies of graded artworks from the past? If so, you could train a CNN to output the grades, or something along those lines. Deep learning seems to be the obvious answer to automating hard-to-systemize yet repetitive work. It doesn't even have to be low error, since you can do a last-second QC on the final grades.
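To make the "learn the grades from past examples" idea concrete without a real network: the crudest possible version just gives a new piece the grade of the most similar past piece, by raw pixel distance. A real system would use a trained CNN instead of this toy nearest-neighbor:

```python
import numpy as np

# Toy stand-in for "predict grades from past graded artworks": grade a new
# image with the grade of its nearest neighbor over flattened pixels.
# A trained CNN would replace this in practice.

def predict_grade(new_image, past_images, past_grades):
    flat_new = new_image.ravel().astype(float)
    distances = [np.linalg.norm(flat_new - img.ravel().astype(float))
                 for img in past_images]
    return past_grades[int(np.argmin(distances))]
```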

If you want something more involved, you could conjure up a system where conceptual similarity could be assessed using some sort of distance metric in the latent space of stable diffusion, quality could be determined as before, and other things could also be extracted from the latent space if enough thought is put into it.
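The "distance metric in the latent space" part usually means something like cosine similarity between embedding vectors. Where the vectors come from (an image encoder, a diffusion model's latent) is the expensive part; the metric itself is a one-liner:

```python
import numpy as np

# Cosine similarity between two embedding vectors. In the proposal above the
# vectors would come from an image model's latent space; here they are just
# placeholder arrays.

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical directions score 1.0, orthogonal ones 0.0, so "conceptual similarity" becomes a number you can threshold.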

I am aware that this is far outside the budget (you might have to hire an ML engineer or two to build and maintain the software) and the Overton window of public education. So individually you might just try to apply some sort of assembly-line technique to your own set of artworks to grade: give all the artworks a line grade, then a color grade, then a texture grade, etc., and finally just sum up the weighted averages using Excel or something. You might be fucking over some students whose artworks are greater than the sum of their parts or whatever (you can add a "ceiling-function holistic multiplier" to counteract this), but it's definitely faster than grading every piece "holistically" one by one, and should be fairly statistically reliable as well. This assembly-line system is the go-to wherever you need to sort through piles of candidates, such as graduate schools or job openings (and it fucks over "greater than the sum of the parts" candidates all the time).
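The weighted-average-plus-capped-multiplier step is literally this small (the weights here are arbitrary examples, not an agreed rubric):

```python
# Sketch of the assembly-line final step: weighted average of per-element
# grades, with an optional holistic multiplier capped so nobody exceeds 100.
# WEIGHTS are invented example values.

WEIGHTS = {"line": 0.4, "color": 0.4, "texture": 0.2}

def final_grade(grades, holistic_multiplier=1.0):
    base = sum(WEIGHTS[k] * grades[k] for k in WEIGHTS)
    return min(round(base * holistic_multiplier), 100)
```

With grades of 80/90/100 for line/color/texture that comes to 88, and a 1.2 multiplier caps out at 100 instead of 106.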

Thanks!

Yeah, that sounds way too advanced for us -- we could probably make a lot more money if we had those kinds of skills.

The second proposal might be possible -- my mother said that at her school they were given paid PLC time to do that with a test, and we already spent three hours arguing about the test this year. If we had assembly-line graded them instead, they would be graded already by now.

These tests don't have to affect the kids' grades, so I don't think it will be a problem from the kids' perspective.

Ideally, we would have a school-approved app where we could upload a picture of a landscape, and say "Find the horizon." Maybe more than one picture for multiple versions. We would mark an area close enough to the horizon to qualify as right. Then it would show another picture with a prompt like "organic shape." We would designate which parts were the organic shapes, they would click, and it would grade based on whether they clicked in one of the specified areas. Something like that. This seems like it should exist, but this is the only test I'm involved in making, so I don't know for sure.
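The grading behind that app is just point-in-region checks. A minimal sketch, assuming the teacher marks the acceptable areas as rectangles (a real app might allow freeform regions):

```python
# Toy version of the "click the horizon" grader: a click is correct if it
# falls inside any teacher-marked target rectangle (x1, y1, x2, y2).

def click_is_correct(click, target_boxes):
    x, y = click
    return any(x1 <= x <= x2 and y1 <= y <= y2
               for (x1, y1, x2, y2) in target_boxes)
```

So with the horizon band marked as the rectangle (0, 40, 100, 60), a click at (50, 50) counts and one up in the sky at (50, 10) doesn't. Summing correct clicks over a dozen prompts gives the numerical score the district wants.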

This sounds concrete enough I might be able to ask the educational technologist about it, which might be a lead, anyway.