riskable

joined 2 years ago
[–] riskable@programming.dev 3 points 1 month ago (1 children)

You see, that's the thing: In order for the US to get to that point, the people must first NOT be chomping at the bit, fantasizing about ripping unelected bureaucrats like Stephen Miller to shreds the moment they see him in person.

Usually the way this happens is that a strongman comes to power promising to bring people like Stephen Miller to justice, not to support them.

I honestly don't think there's enough support behind Trump at this point to pull that off. In fact, a simple marketing campaign pointing out that it's not just Trump but the entire Republican party that is responsible for this mess we're in would do wonders.

Republicans—the ones sitting at home watching this play out on Fox News—aren't getting the right kind of propaganda for Stephen Miller (or Trump's other underlings) to survive past Trump. Even if he doesn't get torn to shreds by some angry mob, he's committing crimes on the regular, which will result in prosecution when a new administration comes around.

The next administration won't be as delusional about preserving tradition when it comes to prosecuting their predecessors. Trump made sure to throw that entire concept into the East Wing right before he had it torn down.

[–] riskable@programming.dev 3 points 1 month ago

Ugh. You're right, of course. We're surrounded by lizard-brained, uncivilized cave people who still believe in fairy tales.

Tell them that their religion is a fantasy without evidence, though, and now you're somehow the unreasonable one.

[–] riskable@programming.dev 38 points 1 month ago (5 children)

So let me get this straight: Stephen Miller is so universally hated that if he doesn't house himself on a protected military base, he fears for his life and family. His response to this is to double down on his continuous campaign of human rights violations‽

Dude! You can only live "safe" like that for three more years. Not even that long if Trump dies of a stroke/heart attack (which seems increasingly likely). Vance isn't going to protect you like this!

Now's the time to start making friends in Nazi sympathizing countries.

[–] riskable@programming.dev 7 points 1 month ago* (last edited 1 month ago) (2 children)

It's much, much more complicated than mere rehabilitation vs. punishment/salvation. When someone goes to prison for a minor drug offense—like this guy—what exactly are we "rehabilitating"? I seriously doubt he had a real addiction.

Then there are things like organized crime: By imprisoning gangsters we're simply removing them from society so they can't commit crimes against people who aren't also in prison. But this doesn't solve the problem of a gangster being able to commit crimes such as ordering a murder from within prison (e.g. via their lawyer or a secret cell phone).

For such people, we have the death penalty (presumably).

Then there's white collar crime and fraud. Do those people belong in prison, or should they instead be forced to live in "affordable housing" with one too many people sharing the same home, work a minimum-wage job with 100% of their wages going to their victims, and regularly be forced to work overtime? Oh sorry, that's my "real justice for rich fraudsters" fantasy 😁

For health insurance executives, we should also make them wait on hold every day to get someone to push the button that unlocks the door to their room. Once a year, we'll make them go through a lengthy bureaucratic process in order to prove that they need access to running water. It should take at least a week.

[–] riskable@programming.dev 1 points 1 month ago

For reference, every AI image model uses ImageNet (as far as I know), which is just a big database of publicly accessible URLs and metadata (classification info like "bird").
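To picture what that means, here's a minimal sketch of an ImageNet-style record (field names and values are illustrative, not the real schema—the point is that it's pointers plus labels, not image bytes):

```python
# Illustrative sketch of an ImageNet-style dataset entry: the dataset is
# mostly URLs to publicly accessible images plus classification metadata.
# Field names and values here are hypothetical, not the actual schema.

record = {
    "url": "https://example.com/images/1234.jpg",  # publicly accessible image
    "synset": "n01503061",                         # WordNet-style class ID (illustrative)
    "label": "bird",                               # human-readable class name
}

def is_wellformed(rec: dict) -> bool:
    """Check that a record has the three fields a training pipeline would need."""
    return all(key in rec for key in ("url", "synset", "label"))

print(is_wellformed(record))  # True
```

A trainer then downloads the images itself; the dataset only distributes the pointers and labels.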

The "big AI" companies like Meta, Google, and OpenAI/Microsoft have access to additional image data sets that are 100% proprietary. But what's interesting is that the image models that are constructed from just ImageNET (and other open sources) are better! They're superior in just about every way!

Compare what you get from, say, ChatGPT (DALL-E 3) with a FLUX model you can download from civit.ai... you'll get such superior results it's like night and day! Not only that, but you have an enormous plethora of LoRAs to choose from to get exactly the type of image you want.

What we're missing is the same sort of open data sets for LLMs. Universities have access to some stuff but even that is licensed.

[–] riskable@programming.dev 1 points 1 month ago* (last edited 1 month ago) (3 children)

Listen, if someone gets physical access to a device in your home that's connected to your wifi, all bets are off. Having a password to gain access via adb is irrelevant. The attack scenario you describe is absurd: if someone's in a celebrity's home, they're not going to go after the robot vacuum when the thermostat, tablets, computers, TV, router, access point, etc. are right there.

If an attacker is physically in the home, the owner has already been compromised. The fact that the owner of a device can open it up and gain root is irrelevant.

Furthermore, since owners have root, they can add a password themselves! That's something they can't do with a lot of the other devices in their home that they supposedly "own" but don't have that power over (devices I'm 100% certain have vulnerabilities).

[–] riskable@programming.dev -2 points 1 month ago (2 children)

stole all that licensed code.

Stealing is when the owner of a thing doesn't have it anymore, because it was stolen.

LLMs aren't "stealing" anything... yet! Soon we'll have them hooked up to robots then they'll be stealing¹ 👍

  1. Because a user instructed it to do so.
[–] riskable@programming.dev 1 points 1 month ago

I guess I get to merge my code and never work on this project again.

This is the way.

[–] riskable@programming.dev 230 points 1 month ago (4 children)

FYI: That's more Windows games than run in Windows!

WTF? Why? Because a lot of older games don't run on versions of Windows newer than the ones they were made for! They still run great on Linux though 👍

[–] riskable@programming.dev 3 points 1 month ago

A pet project... a web novel publishing platform. It's very fancy: it uses yjs (CRDTs) for collaborative editing and GSAP for special effects (that authors can use in their novels), and it's built on Vue 3 (with VueUse and PrimeVue) with Python 3.13 and FastAPI on the backend.

The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don't know TipTap all that well, and I really wanted to see what AI code-assist tools are capable of.

I've evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it's such shit; don't even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

  • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
  • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (vite + TypeScript + oh-god-what-else-now haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f'ing reason. They'll randomly change your variables around too, so you really have to keep an eye on them. It's hard to describe the types of abominations they'll create if you let them, but here's an example: in a bash script I had something like SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say "you have to be careful," I mean it!
  • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven't found it to make random hallucinations/total bullshit changes the way gpt5 does.
  • gpt-oss:20b: Surprisingly good! Also, faster than you'd think it'd be—even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It's like 90% of the way there. A few generations of PC hardware/GPUs and we won't need the cloud anymore.
  • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b so why bother? They're all about the same (for my use cases).

For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.
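Side note on the local models: under the hood, tools pointed at a local model are just hitting Ollama's HTTP API on localhost. Here's a minimal sketch of building such a request (the endpoint and fields follow Ollama's documented /api/generate route; no server is actually contacted in this snippet):

```python
import json

# Minimal sketch of what a local code-assist request to Ollama looks like.
# Ollama serves a REST API on localhost:11434; /api/generate takes a model
# name and a prompt. This only builds the JSON payload, it doesn't send it.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str, stream: bool = False) -> str:
    """Return the JSON body for a one-shot generation request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_request("gpt-oss:20b", "Refactor this function to use pathlib.")
print(body)
```

Actually sending it is one urllib.request.urlopen(OLLAMA_URL, data=body.encode()) away, assuming an Ollama server is running locally.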

[–] riskable@programming.dev 3 points 1 month ago* (last edited 1 month ago) (5 children)

If I broke into your home, why TF would I carefully take apart your robot vacuum in order to copy your wifi credentials‽

Also, WTF other "secrets" are you storing on your robot vacuum‽

This is not a realistic attack scenario.

[–] riskable@programming.dev -5 points 1 month ago (11 children)

I'm having the opposite experience: It's been super fun! It can be frustrating when the AI can't figure things out, but overall I've found it quite pleasant when using Claude Code (and Ollama's gpt-oss:120b for when I run out of credits haha). The codex extension and the entire range of OpenAI gpt5 models don't provide the same level of "wow, that just worked!" or "wow, this code is actually well-documented and readable."

Seriously: If you haven't tried Claude Code (in VS Code via the extension of the same name), you're missing out. It's really a full generation or two ahead of the other coding assistant models. It's that good.

Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn't give you enough credits and the gap between $20/month and $100/month is too large 😁
