brucethemoose

joined 1 year ago
[–] brucethemoose@lemmy.world 1 points 18 hours ago

Also a crime. Not just great games in their niche, but a long history of them.

[–] brucethemoose@lemmy.world 2 points 18 hours ago

Never underestimate Phil Spencer.

[–] brucethemoose@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

The junocam page has raw shots from the actual device: https://www.msss.com/all_projects/junocam.php

Caption of another:

Multiple images taken with the JunoCam instrument on three separate orbits were combined to show all areas in daylight, enhanced color, and stereographic projection.

In other words, the images you see are heavily processed composites...

Dare I say, "AI enhanced," as they sometimes do use ML algorithms for astronomy. Though ones designed for scientific usefulness, of course, and mostly for pattern identification in bulk data AFAIK.

[–] brucethemoose@lemmy.world 15 points 2 days ago* (last edited 2 days ago) (3 children)

...iOS forces uses Apple services including getting apps through Apple...

Can't speak to the rest of the claims, but Android practically does too. If users have to sideload an app, you've lost 99% of them, if not more.

It makes me suspect they're not talking about the stock systems OEMs ship.

Relevant XKCD: https://xkcd.com/2501/

[–] brucethemoose@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Aren't fighters dead?

Look, I like cool planes, but military scenarios where 5-500 drones would do worse than a single mega-expensive jet, and that aren't already covered by existing planes/missiles, seem... very rare.

Look at Ukraine's drone ops. I mean, hell, imagine if the DoD put their budget into that.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

Well, exactly. Trump apparently has a line to Apple and could probably get Tim to take it down.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

Yep.

It's not the best upscale TBH.

Hence I brought up redoing it with some of the same techniques (oldschool vapoursynth processing + manual pixel peeping) mixed with more modern deinterlacing and better models than Waifu2X. Maybe even a finetune? Ban.

[–] brucethemoose@lemmy.world 28 points 3 days ago* (last edited 3 days ago) (7 children)

How does this make any sense?

Shouldn't they be suing Apple to take it down if they don't like it? I know they just want to weaken the press, but it feels like an especially weak excuse.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

Pro is 120hz.

But they are expensive as heck. I only got the 16 Plus because it's a carrier loss leader, heh.

And it wouldn't fix some of my other quibbles with iOS's inflexibility. My ancient jailbroken iPhone 4 was more customizable than iOS is now, and Apple is still slowly, poorly implementing features I had a decade ago. It's mind-boggling, and jailbreaking isn't a good option anymore.

[–] brucethemoose@lemmy.world 0 points 3 days ago

Nah I meant the opposite. Journalistic integrity was learned through long, hard history.

Now that traditional journalism is dying, it's like the streamer generation has to learn it from scratch, heh.

[–] brucethemoose@lemmy.world 7 points 3 days ago* (last edited 3 days ago) (2 children)

I got banned from a fandom subreddit for pointing out that a certain fan remaster was (partially, with tons of manual work) made with ML models. Specifically with oldschool GANs, and some smaller, older models as part of a deinterlacing pipeline, from before 'generative AI' was even a term.

 

As to why it (IMO) qualifies:

"My children are 22, 25, and 27. I will literally fight ANYONE for their future," Greene wrote. "And their future and their entire generation's future MUST be free of America LAST foreign wars that provoke terrorists attacks on our homeland, military drafts, and NUCLEAR WAR."

Hence, she feels her support is threatening her kids.

"MTG getting her face eaten" was not on my 2025 bingo card, though she is in the early stage of face eating.

 

"It's not politically correct to use the term, 'Regime Change' but if the current Iranian Regime is unable to MAKE IRAN GREAT AGAIN, why wouldn't there be a Regime change??? MIGA!!

 
  • The IDF is planning to displace close to 2 million Palestinians to the Rafah area, where compounds for the delivery of humanitarian aid are being built.
  • The compounds are to be managed by a new international foundation and private U.S. companies, though it's unclear how the plan will function after the UN and all aid organizations announced they won't take part.
 

So I had a clip I wanted to upload to a lemmy comment:

  • Tried it as an (avc) mp4... Failed.
  • OK, too big? I shrink it to 2MB, then 1MB. Failed.
  • VP9 Webm maybe? 2MB, 1MB, failed. AV1? Failed.
  • OK, fine, no video. Let's try an animated AVIF. Failed. It seems lemmy doesn't even take static AVIF images.
  • WebP animation then... Failed. Animated PNG, failed.

End result, I have to burden the server with a massive, crappy looking GIF after trying a dozen formats. With all due respect, this is worse than some aging service like Reddit that doesn't support new media formats.

For reference, I'm using the web interface. Is this just a format restriction of lemmy.world, or an underlying software support issue?

 

53% of Americans approve of Trump so far, according to a newly released CBS News/YouGov poll conducted Feb. 5 to 7, while 47% disapproved.

A large majority, 70%, said he was doing what he promised in the campaign, per the poll that was released on Sunday.

Yes, but: 66% said he was not focusing enough on lowering prices, a key campaign trail promise that propelled Trump to the White House.

44% of Republicans said Musk and DOGE should have "some" influence, while just 13% of Democrats agreed.

 

Here's the Meta formula:

  • Put a Trump friend on your board (Ultimate Fighting Championship CEO Dana White).
  • Promote a prominent Republican as your chief global affairs officer (Joel Kaplan, succeeding liberal-friendly Nick Clegg, president of global affairs).
  • Align your philosophy with Trump's on a big-ticket public issue (free speech over fact-checking).
  • Announce your philosophical change on Fox News, hoping Trump is watching. In this case, he was. "Meta, Facebook, I think they've come a long way," Trump said at a Mar-a-Lago news conference, adding of Kaplan's appearance on the "Fox and Friends" curvy couch: "The man was very impressive."
  • Take a big public stand on a favorite issue for Trump and MAGA (rolling back DEI programs).
  • Amplify that stand in an interview with Fox News Digital. (Kaplan again!)
  • Go on Joe Rogan's podcast and blast President Biden for censorship.
372
submitted 6 months ago* (last edited 6 months ago) by brucethemoose@lemmy.world to c/politics@lemmy.world
 

Reality check: Trump pledged to end the program in 2016.

Called it. When push comes to shove, Trump is always going to side with the ultra-rich.

 

Trump, who has remained silent thus far on the schism, faces a quickly deepening conflict between his richest and most powerful advisors on one hand, and the people who swept him to office on the other.

All this is stupid. But I know one thing:

Trump is a billionaire.

And I predict his followers are going to learn who he’ll side with when push comes to shove.

Also, Bannon’s take is interesting:

Bannon tells Axios he helped kick off the debate with a now-viral Gettr post earlier this month calling out a lack of support for the Black and Hispanic communities in Big Tech.

 

I think the title explains it all… Even right wing influencers can have their faces eaten. And Twitter views are literally their livelihood.

Trump's conspiracy-minded ally Laura Loomer, New York Young Republican Club president Gavin Wax and InfoWars host Owen Shroyer all said their verification badges disappeared after they criticized Musk's support for H1B visas, railed against Indian culture and attacked Ramaswamy, Musk's DOGE co-chair.

 

Maybe even 32GB if they use newer ICs.

More explanation (and my source of the tip): https://www.pcgamer.com/hardware/graphics-cards/shipping-document-suggests-that-a-24-gb-version-of-intels-arc-b580-graphics-card-could-be-heading-to-market-though-not-for-gaming/

Would be awesome if true, and if it's affordable. Screw Nvidia (and, inexplicably, AMD) for their VRAM gouging.

 

I see a lot of talk of Ollama here, which I personally don't like because:

  • The quantizations they use tend to be suboptimal

  • It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.

  • It abstracts away things that you should really know for hosting LLMs.

  • I don't like some things about the devs. I won't rant, but I especially don't like the hint they're cooking up something commercial.

So, here's a quick guide to get away from Ollama.

  • First step is to pick your OS. Windows is fine, but if you're setting up something new, Linux is best. I favor CachyOS in particular for its great Python performance. If you use Windows, be sure to enable hardware-accelerated GPU scheduling and disable the shared memory (sysmem) fallback.

  • Ensure the latest version of CUDA (or ROCm, if using AMD) is installed. Linux is great for this, as many distros package them for you. (A quick sanity check follows this list.)

  • Install Python 3.11.x or 3.12.x (or at least whatever your distro supports) and git. If on Linux, also install your distro's "build tools" package.
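Before going further, it doesn't hurt to sanity-check that the driver stack is working and see exactly how much VRAM you're dealing with (you'll need that number below). A minimal sketch for Nvidia cards, just wrapping nvidia-smi; AMD/ROCm users would query rocm-smi instead:

```python
# Quick GPU/VRAM sanity check (Nvidia only; assumes nvidia-smi is on your PATH).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.used",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
# One line per GPU, e.g. "NVIDIA GeForce RTX 3090, 24576 MiB, 1234 MiB"
for line in result.stdout.strip().splitlines():
    print(line)
```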

Now for actually installing the runtime. There are a great number of inference engines supporting different quantizations; forgive the Reddit link, but see: https://old.reddit.com/r/LocalLLaMA/comments/1fg3jgr/a_large_table_of_inference_engines_and_supported/

As far as I am concerned, 3 matter to "home" hosters on consumer GPUs:

  • Exllama (and by extension TabbyAPI), as a very fast, very memory-efficient "GPU only" runtime, supports AMD via ROCm and Nvidia via CUDA: https://github.com/theroyallab/tabbyAPI

  • Aphrodite Engine. While not strictly as VRAM efficient, it's much faster with parallel API calls, reasonably efficient at very short context, and supports just about every quantization under the sun and more exotic models than exllama. AMD/Nvidia only: https://github.com/PygmalionAI/Aphrodite-engine

  • This fork of kobold.cpp, which supports more fine-grained KV cache quantization (we will get to that). It supports CPU offloading and, I think, Apple Metal: https://github.com/Nexesenex/croco.cpp

Now, there are also reasons I don't like llama.cpp, but one of the big ones is that sometimes its model implementations have... quality-degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough VRAM to avoid offloading to CPU and can figure out how to set it up.

Follow the install instructions in the TabbyAPI documentation. This can go wrong, so if anyone gets stuck I can help with that.

  • Next, figure out how much VRAM you have.

  • Figure out how much "context" you want, aka how much text the LLM can ingest. If a model has a context length of, say, "8K", that means it can take 8K tokens as input, or somewhat less than 8K words. Not all tokenizers are the same: some, like Qwen 2.5's, fit nearly a word per token, while others are more in the ballpark of half a word per token or less (there's a short token-counting sketch after this list).

  • Keep in mind that the actual context length of many models is an outright lie, see: https://github.com/hsiehjackson/RULER

  • Exllama has a feature called "kv cache quantization" that can dramatically shrink the VRAM the "context" of an LLM takes up. Unlike llama.cpp's, its Q4 cache is basically lossless, and on a model like Command-R, an 80K+ context can take up less than 4GB! It's essential to enable Q4 or Q6 cache to squeeze as much LLM as you can into your GPU.

  • With that in mind, you can search huggingface for your desired model. Since we are using tabbyAPI, we want to search for "exl2" quantizations: https://huggingface.co/models?sort=modified&search=exl2

  • There are all sorts of finetunes... and a lot of straight-up garbage. But I will post some general recommendations based on total VRAM:

  • 4GB: A very small quantization of Qwen 2.5 7B. Or maybe Llama 3B.

  • 6GB: IMO llama 3.1 8B is best here. There are many finetunes of this depending on what you want (horny chat, tool usage, math, whatever). For coding, I would recommend Qwen 7B coder instead: https://huggingface.co/models?sort=trending&search=qwen+7b+exl2

  • 8GB-12GB: Qwen 2.5 14B is king! Unlike its 7B counterpart, I find the 14B version of the model incredible for its size, and it will squeeze into this VRAM pool (albeit with very short context/tight quantization for the 8GB cards). I would recommend trying Arcee's new distillation in particular: https://huggingface.co/bartowski/SuperNova-Medius-exl2

  • 16GB: Mistral 22B, Mistral Coder 22B, and very tight quantizations of Qwen 2.5 32B are possible. Honorable mention goes to InternLM 2.5 20B, which is alright even at 128K context.

  • 20GB-24GB: Command-R 2024 35B is excellent for "in context" work, like asking questions about long documents, continuing long stories, anything involving working "with" the text you feed to an LLM rather than pulling from its internal knowledge pool. It's also quite good at longer contexts, out to 64K-80K more or less, all of which fits in 24GB. Otherwise, stick to Qwen 2.5 32B, which still has a very respectable 32K native context and a rather mediocre 64K "extended" context via YaRN: https://huggingface.co/DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2

  • 32GB: same as 24GB, just with a higher-bpw quantization. But this is also the threshold where lower-bpw quantizations of Qwen 2.5 72B (at short context) start to make sense.

  • 48GB: Llama 3.1 70B (for longer context) or Qwen 2.5 72B (for 32K context or less)
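As promised above, here's a tiny token-counting sketch so you can see what a given prompt actually costs against a specific model's tokenizer. It assumes the transformers package is installed; the Qwen repo and the file name are just examples:

```python
# Count how many tokens a prompt consumes for a given model's tokenizer.
# Requires: pip install transformers (plus network access to huggingface.co).
from transformers import AutoTokenizer

# Example tokenizer; swap in whatever model you actually plan to run.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

text = open("my_long_document.txt").read()  # whatever you want to stuff into context
n_tokens = len(tok.encode(text))
n_words = len(text.split())

print(f"{n_words} words -> {n_tokens} tokens ({n_words / n_tokens:.2f} words per token)")
```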

Again, browse huggingface and pick an exl2 quantization that will cleanly fill your VRAM pool plus the amount of context you want to specify in TabbyAPI. Many quantizers such as bartowski will list how much space they take up, but you can also just look at the available file size.
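If you'd rather not eyeball file sizes in a browser, huggingface_hub can list them (and download the quant) from Python. A rough sketch; the repo and branch here are only examples, and note that exl2 repos like bartowski's usually keep each bpw on its own branch:

```python
# Estimate how much VRAM an exl2 quant's weights need by summing file sizes on the Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import HfApi, snapshot_download

repo = "bartowski/SuperNova-Medius-exl2"  # example repo from above
branch = "6_5"                            # example branch; pick the bpw you want

info = HfApi().model_info(repo, revision=branch, files_metadata=True)
weight_bytes = sum(
    s.size for s in info.siblings
    if s.rfilename.endswith(".safetensors") and s.size is not None
)
print(f"Weights: ~{weight_bytes / 1e9:.1f} GB (leave headroom for context/KV cache)")

# You can also download straight from Python instead of a standalone tool:
# snapshot_download(repo, revision=branch, local_dir="models/SuperNova-Medius-exl2")
```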

  • Now... you have to download the model. Bartowski has instructions here, but I prefer to use this nifty standalone tool instead: https://github.com/bodaay/HuggingFaceModelDownloader

  • Put it in your TabbyAPI models folder, and follow the documentation on the wiki.

  • There are a lot of options. Some to keep in mind are chunk_size (higher than 2048 will process long contexts faster but take up lots of VRAM; less will save a little VRAM), cache_mode (use Q4 for long context, Q6/Q8 for short context if you have room), max_seq_len (this is your context length), tensor_parallel (for faster inference with 2 identical GPUs), and max_batch_size (parallel processing if you have multiple users hitting the tabbyAPI server, but more VRAM usage).

  • Now... pick your frontend. The tabbyAPI wiki has a good compilation of community projects, but Open Web UI is very popular right now (https://github.com/open-webui/open-webui). I personally use exui (https://github.com/turboderp/exui).

  • And be careful with your sampling settings when using LLMs. Different models behave differently, but one of the most common mistakes people make is using "old" sampling parameters for new models. In general, keep temperature very low (<0.1, or even zero) and rep penalty low (1.01?) unless you need long, creative responses. If available in your UI, enable DRY sampling to tamp down repetition without "dumbing down" the model with too much temperature or repetition penalty. Always use a MinP of 0.05 or higher and disable other samplers. This is especially important for Chinese models like Qwen, as MinP cuts out "wrong language" answers from the response (a minimal API example follows this list).

  • Now, once this is all set up and running, I'd recommend throttling your GPU, as it simply doesn't need its full core speed to hit maximum inference speed while generating. For my 3090, I use something like sudo nvidia-smi -pl 290, which throttles it down from 420W to 290W.
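As mentioned in the sampling point above, here's a minimal sketch of hitting TabbyAPI's OpenAI-compatible endpoint with conservative settings. The host/port, API key handling, and the min_p pass-through are assumptions on my part; check your config and the tabbyAPI wiki for what your install actually expects:

```python
# Minimal chat request against TabbyAPI's OpenAI-compatible API with conservative sampling.
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed default host/port; match your config
    api_key="YOUR_TABBYAPI_KEY",          # whatever key your server was set up with
)

resp = client.chat.completions.create(
    model="SuperNova-Medius-exl2",        # the model name your server reports
    messages=[{"role": "user", "content": "Summarize this in one sentence: ..."}],
    temperature=0.1,                      # keep it low for factual tasks
    extra_body={"min_p": 0.05},           # non-standard sampler param, passed through if supported
)
print(resp.choices[0].message.content)
```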

Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.
