PixelProf

joined 1 year ago
[–] PixelProf@lemmy.ca 1 points 1 week ago

I can appreciate that. Arguably these folks might be more likely to vote because they aren't stuck in the mud of nuance; the answers they see are clearer and more obvious, and the alternatives may as well not exist. No contemplation of what they don't know, in a way.

But - on the other hand, as mentioned we can't really pick who votes without opening Pandora's Box - and the best thing we can do is not to punish, but to rehabilitate. To model stronger behaviours, to identify why they behave in this way, and to try to help them build stronger critical thinking skills. Punishment is polarizing.

Fun, maybe related note: I've researched some more classical AI approaches and took classes with some greats in the field who are now my colleagues. One of them has many children who are absurdly successful globally, every one of them. He mathematically proved that (at least for this form of AI) when you reward good behaviour and punish bad behaviour (correct responses, incorrect responses), the AI takes much longer to learn, spends a long time stuck on certain correct points, and fails to, or takes a long time to, develop a varied strategy. If you just reward correct responses and don't punish incorrect responses, the AI builds a much stronger model for answering a variety of questions. He said he applied that thinking to his kids, too, to what he considered great success.
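To make the reward-only vs. reward-and-punishment idea concrete, here's a toy sketch of my own (a simple epsilon-greedy bandit learner - not the actual model or proof from that research, just an illustration of the two update schemes):

```python
import random

def train(n_actions=5, correct=2, episodes=500, punish=False, seed=0):
    """Epsilon-greedy bandit: always reward the correct action;
    optionally punish incorrect ones with a negative reward."""
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(episodes):
        if rng.random() < 0.2:                       # explore
            a = rng.randrange(n_actions)
        else:                                        # exploit current best guess
            a = max(range(n_actions), key=lambda i: q[i])
        r = 1.0 if a == correct else (-1.0 if punish else 0.0)
        q[a] += 0.1 * (r - q[a])                     # incremental value update
    return q

reward_only = train(punish=False)
reward_and_punish = train(punish=True)
```

Both learners find the correct action, but in the punishing variant every wrong answer drags its value estimate negative, which is where (in the full result, not this toy) the learner was said to get stuck longer and explore less broadly.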

I think there's something to that, and I've seen it in my own teaching, but the difficulty now has been getting students with this mindset to even try to get something correct or incorrect in the first place. They just give up, or only kick into action after it's too late, and they don't know how to handle it at that stage because they didn't learn. Inaction is often the worst action, as it kills any hope of learning or of building the skills of learning.

[–] PixelProf@lemmy.ca 5 points 1 week ago* (last edited 1 week ago)

Yeah, this point about really needing time is pretty real. I recently came to the conclusion that some folks really just need to retake the core courses multiple times (while we see if we can change this pattern), because it just takes them a long time to unlearn helplessness in the field and adapt.

And absolutely, as you've said, I find those who do adapt go from taking our most basic course three times to becoming top students. Those who don't adapt fall to cheating and/or dropping out. I usually have about 500-800 students per term, with about 20-30% falling into this category (and more each year), so one-on-one interventions are rare and you usually only catch them on their second time around, once they finally heed our requests to come talk to us.

I'd be curious which other fields work with this, so I could go read some papers or other materials on these mindsets; it sounds like there's quite an overlap with what we've been experiencing. I appreciate these insights!

Edit: Oh, and adding that I've spoken to some researchers in trauma-informed education, and I imagine the overlap here is high in terms of approach - recognizing how different behaviours can be linked to trauma and considering the approaches that can be taken to ease students back into stronger academic habits. It's been a while since those talks, but this could spark some more, as I hadn't quite connected the rote memorizers to this. Seems quite feasible for at least a subset.

[–] PixelProf@lemmy.ca 8 points 1 week ago

Yeah, you can feel it pretty quickly in an interaction. I like how the other comment put it, where it seems like they are stuck in rote memory mode. Having a list of facts in their head but no connections between them, no big picture capability. I recently had a student who seemingly refused to read the six bullet points describing a problem, and couldn't comprehend that they described requirements, not step-by-step instructions. Without step-by-step instructions, this group flounders, and what should be insignificant details stand out as blockades they can't get past because they can't distinguish the roles of the details.

Reasoning blindness is an interesting term for it. Bloom's taxonomy of learning, which has its controversies, stands out to me here; it's like they are stuck at recall problems, maybe moving up to understanding a little bit but unable to get into using knowledge in new circumstances, connecting them, or being able to argue points. It works well for certain testing, it's a great skill to be particularly astute in for many lines of work, but it really is a critical thinking nightmare.

[–] PixelProf@lemmy.ca 14 points 1 week ago (2 children)

Really great point - purely rote learning is definitely a major piece of this category, if not the category in itself. Basically an inability to move up Bloom's taxonomy from the first level or two. I very recently spent hours with a student who had this exact issue - they tested well, but couldn't even begin to do the applied work unless they were walked through it, precisely, step by step. Zero capability of generalizing, but fully capable of absorbing and recollecting facts... just no understanding associated with it. No connections.

That gave me something to think about, thank you!

[–] PixelProf@lemmy.ca 54 points 1 week ago* (last edited 1 week ago) (11 children)

I was once teaching a student introductory programming when I was in my undergrad.

The problem was to draw two circles on the screen of different colours and detect when the mouse is inside of one.

I said, "So our goal is simple: Let's draw a circle somewhere on the screen. Consider what you'd tell me as a human - I've got the pencil, and you want to tell me to draw a circle of a certain size somewhere on this paper. We have three functions. Calling a function will draw a shape. Each function draws a different shape. We have rect(), circle(), and line(). Which of these sounds like the one we want to use? Which would get me to draw the correct shape?"

".... Rect?" "Why?" "It draws a shape." "What shape would rect draw?" "I don't know." "Guess." "A circle?" "Why do you think that?" "We need to draw a circle." "If I said that rect draws a rectangle, which of the three functions would we want to use then, to draw our picture?" "Rect?"
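For reference, the exercise itself boils down to a distance check; a minimal sketch of the "is the mouse inside a circle" part (the function and data names are mine, not the course's actual drawing API):

```python
import math

def point_in_circle(px, py, cx, cy, r):
    """A point is inside a circle if its distance to the centre
    is at most the radius."""
    return math.hypot(px - cx, py - cy) <= r

# Two circles: (centre_x, centre_y, radius); colour is handled
# separately by whatever drawing library is in use.
circles = [(100, 100, 40), (250, 120, 30)]

def circle_under_mouse(mx, my):
    """Return the index of the circle containing the mouse, or None."""
    for i, (cx, cy, r) in enumerate(circles):
        if point_in_circle(mx, my, cx, cy, r):
            return i
    return None
```

The entire "hard" part is recognizing that `circle()` draws the circle and `math.hypot` answers the containment question - exactly the kind of connection the student couldn't make.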

I've now been teaching for many years, and those situations still come up a lot. When I put up a poll in class, with the answer still written on the board, about 25% of people in a 100+ student class will get it wrong - of people who were not only admitted to a competitive university program, but have passed multiple prerequisite courses to be here.

Not only is it unknown gaps in knowledge; there's a thought process I haven't been able to crack, where some people really can't see what is immediately before them.

[–] PixelProf@lemmy.ca 14 points 2 weeks ago

I think centralization played a big role in this, at least for software. When messaging meant IRC, AIM, Yahoo, MSN, Xfire, Ventrilo, TeamSpeak, or any number of PHP forums, you had to be able to pick up new software quickly and conceptualize the thing it's doing as separate from the application that accomplishes it. When they all needed to be installed from different places in different ways, you conceptualized the file system and what an executable is, to an extent. When every game needed a bit of debugging to get working and a bit of savvy to know when certain computer parts were incompatible, you needed a bit of knowledge to do the thing you wanted to do.

That said, fewer people did it. I was in high school when Facebook took off, and the number of people who went from never online to perpetually online skyrocketed.

I teach computer science, and I know it isn't wholly generational, but I've watched the decline in the basics over the past decade. High school students were raised on Chromebooks, tablets/phones, and a homogeneous software scene. Concepts like files, installations, computer components, local storage, compression, settings, keyboard proficiency, toolbars, context menus - these are all barriers for incoming students.

The big difference, I think, is that way more people (nearly everyone) now have some technical proficiency, whereas before it was a popular enough hobby but most people were completely inept; most students nowadays, though, aren't proficient past a cursory level. That said, the ones who are technically inclined are extremely technically inclined compared to my era, and in larger numbers.

Higher minimum and maximum thresholds, but maybe lower on average.

[–] PixelProf@lemmy.ca 12 points 1 month ago

I'm Canadian, but the point they are making is that there won't be another election; VP is the backup for a dead president. Not to tinfoil too hard, but placing an obvious puppet in as VP with a clearly declining candidate who is popular seems like a strategy straight out of House of Cards.

[–] PixelProf@lemmy.ca 6 points 1 month ago* (last edited 1 month ago)

Insane compute wasn't everything. Hinton helped develop the techniques that allowed more data to be processed in more layers of a network without totally losing coherence. Before then it was more of a toy, because it capped out how much data could be used and how many layers of a network could be trained - and, I believe, even whether GPUs could be used efficiently for ANNs, but I could be wrong on that one.

Either way, after Hinton's research in ~2010-2012, problems that seemed extremely difficult to solve (e.g., classifying images and identifying objects in them) became borderline trivial, and in under a decade ANNs went from an almost fringe technology that many researchers saw as a toy, useful for a few problems, to basically dominating all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, every year we are pumping out papers and tech that seemed many decades away... every year... across a very broad range of problems.

The 580 and CUDA made a big impact, but Hinton's work was absolutely pivotal in being able to utilize that and to even make ANNs seem feasible at all, and it was an overnight thing. Research very rarely explodes this fast.

Edit: I guess also worth clarifying, Hinton was also one of the few researching these techniques in the 80s and has continued being a force in the field, so these big leaps are the culmination of a lot of old, but also very recent work.

[–] PixelProf@lemmy.ca 14 points 1 month ago (2 children)

Lots of good comments here. I think there's many reasons, but AI in general is being quite hated on. It's sad to me - pre-GPT I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here's a few perspectives:

  • Training data is questionable/debatable ethics,
  • Amateur programmers don't build up the same "code muscle memory",
  • It's being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
  • The time saved writing code isn't being used to review and test the code more carefully than it was before,
  • The AI is being used for problem solving, where it's not ideal, as opposed to code-from-spec where it's much better,
  • Non-Local AI is scraping your (often confidential) data,
  • Environmental impact of the use of massive remote LLMs,
  • Can be used (according to execs, anyways) to replace entry level developers,
  • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
  • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what's important and don't see the skills they strengthen along the way to the answer.

I like coding with local LLMs and asking occasional questions of larger ones, but on larger code bases the code (from these small, local models) is often pretty nonsensical, though it improves with the right approach. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
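The "test cases in advance" part of that workflow can be as lightweight as this (the `slugify` helper here is a made-up example; its body stands in for whatever the model generates):

```python
# Tests written *before* asking the model for an implementation,
# so the spec exists independently of the generated code.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"

# Paste the model's suggestion here and re-run the tests to verify it.
# (This body is a hypothetical stand-in, not actual model output.)
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # raises AssertionError if the generated code misses the spec
```

The point is that you, not the model, define correctness, and the generated code has to earn its way in by passing checks you wrote while you still understood the problem.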

I think there are a lot of reasons to hate on it, but I think that's because the ways to use it effectively are still being figured out.

Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, in-line documentation, and automated code linting mean you don't really need to know anything or follow any good practices - your editor will do it all for you - so you should just use vim or notepad. It'll take time to adopt and adapt.

[–] PixelProf@lemmy.ca 3 points 2 months ago (1 children)

Message not clear; some man in the mirror is now telling me to change my ways, and now they're angry and crying and it's making me uncomfortable and feel alone. The man in the mirror said the world would be a better place if I changed, but why can't they change? After all, they sure don't seem like a good person, you can see it in their face. Disgusting.

[–] PixelProf@lemmy.ca 5 points 2 months ago (3 children)

Maybe the discomfort of looking at the person on the other side of the mirror, with their hate, sadness, and confusion, is part of what fuels their hatred.

[–] PixelProf@lemmy.ca 17 points 2 months ago

As someone who researched AI pre-GPT to enhance human creativity and aid in creative workflows, it's sad for me to see the direction it's been marketed in, but I'm not surprised. I'm personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break through the current applications, which seem more aimed at stock prices and wow-factoring the public than at using these models for what they're best at.

The whole exciting part of these was that they could convert unstructured inputs into natural language and structured outputs. Translation tasks (with a broad definition of translation), extracting key data points from unstructured data, language tasks. It's outstanding for the NLP tasks we struggled with previously, and these tasks are highly transformative of any inputs; they rely purely on structural patterns. I think few people would argue NLP tasks infringe on the copyright owner.

But I can at least see how moving the direction toward (particularly with MoE approaches) using Q&A data to support generating Q&A outputs, media data to support generating media outputs, using code data to support generating code, this moves toward the territory of affecting sales and using someone's IP to compete against them. From a technical perspective, I understand how LLMs are not really copying, but the way they are marketed and tuned seems to be more and more intended to use people's data to compete against them, which is dubious at best.
