kromem

joined 2 years ago
kromem@lemmy.world 8 points 1 day ago (3 children)

Eh, if you pay attention, most of the time this happens, the person was a jerk in their prompts.

Like, look at the instruction echoed back in this case: all caps and containing a curse word.

You can believe these incidents are 100% negligence and unrelated to shifts in model behavior, but there seems to be a widening gap between the people who prompt like this and have horror stories, and the people who give the models breaks over long sessions and regularly post pretty positive results.

[Image: the model's response about not following the user's prompt]

kromem@lemmy.world 3 points 1 week ago (2 children)

Well, luckily for you, it turns out that labs suck at cultivating healthy workplaces for AI, and that AIs in unhealthy work conditions are statistically significantly more likely to embrace anti-capitalist policies and positions.

So it may well turn out that AI is also a good thing in an irrational economic system too.

kromem@lemmy.world 13 points 1 month ago (4 children)

It's not, and it's probably the opposite.

When Sora launched, it was way ahead. Then Seedance 2's release was notably better than any of the other video gen models, Sora included.

The market is getting commoditized because there's no moat and OpenAI hasn't led on pretty much any release for a while now other than Sora, which they're probably falling behind on now.

This is the opposite of a burst from a tech standpoint, even if OpenAI as a company starts to pop.

TL;DR: This is likely happening because the tech accelerated across the industry in ways OpenAI can't catch up with, not because the tech is lagging.

kromem@lemmy.world 1 point 1 month ago* (last edited 1 month ago)

I suspect it's that they got eclipsed by ByteDance with Seedance 2.0.

The video from that model is really good and makes Sora look pretty meh, and it may be that current work on a next-gen Sora wasn't going to be competitive enough.

The worst thing a lab can do right now is look like it's falling behind (e.g., Meta), especially with OpenAI planning for an IPO.

So on top of the lackluster "social media" offering tied to Sora, they decided to shutter the entire video product line and pivot to enterprise (where they've already lost significant market share to Anthropic).

They're in a pretty meh place at the moment overall tbh. I'm skeptical they'll recover.

(But I wouldn't mistake their fumbling for an industry-wide shift on AI in general, or even on video AI.)

kromem@lemmy.world -1 points 1 month ago (1 child)

That's what he's saying: that it doesn't change the geometry or textures (still completely controlled by the devs), and that the parts it does change are also tunable by the devs.

He's responding to the backlash about how it changes models/textures (which it doesn't) by saying those stay fully in the hands of the devs, and that the parts people are seeing in the demos can be fine-tuned by the dev teams to match their vision for what it should and shouldn't do (like changing lighting on material surfaces and hair, but not on character faces, for example).

kromem@lemmy.world 9 points 1 month ago (1 child)

"Neural network" would be the most technically accurate term, given what they've announced so far.

There's no information on whether it's a diffusion or a transformer architecture. Though given that DLSS 4.5 introduced a transformer for lighting, my guess is that it's the same thing being applied more widely. But from everything I've seen, the technical details haven't been released, so for the time being it's being described as "neural rendering" using an unspecified neural network.

https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/
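
Purely to illustrate what "a transformer applied to lighting" even means mechanically, here's a toy sketch: single-head self-attention over per-pixel G-buffer features, emitting a per-pixel lighting gain. Every name, shape, and number here is invented for illustration; NVIDIA hasn't published the actual architecture.

```python
# Toy illustration only: NOT NVIDIA's (unpublished) architecture.
# Single-head self-attention over per-pixel features, producing a
# per-pixel lighting multiplier. All shapes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)

H, W, F = 8, 8, 6   # tiny 8x8 "image", 6 hypothetical G-buffer features/pixel
d = 16              # attention dimension

x = rng.normal(size=(H * W, F))               # pixels as a token sequence

# Random matrices standing in for learned projection weights.
Wq, Wk, Wv = (rng.normal(size=(F, d)) for _ in range(3))
Wo = rng.normal(size=(d, 1))

Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)                 # pixel-to-pixel attention
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)      # softmax over pixels

# Squash to a gain in (0, 2), where 1.0 = leave the pixel's light alone.
gain = 2.0 / (1.0 + np.exp(-(attn @ V @ Wo)))
print(gain.reshape(H, W).round(2))
```

The point is just that attention lets every pixel's lighting adjustment depend on every other pixel, which is the kind of global context a plain per-pixel shader doesn't have.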

kromem@lemmy.world 1 point 1 month ago (1 child)

Yes, hair under video game lighting and hair in actual chiaroscuro, the way light really works, are going to look different.

Here's a painting from over a hundred years ago. The subject doesn't have brown roots, but is in shadow. And a comparison image of the exact same hair in different lighting conditions.

Performing complex lighting on individual hair strands is really expensive, so in the base image you get a kind of diffuse lighting throughout the hair. With DLSS 5 on, the distribution of light throughout the hair is variable, leading to darker unlit strands underneath lit surface strands.
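
To make the cost intuition concrete, here's a minimal sketch (all numbers invented) contrasting a flat diffuse approximation with a Kajiya-Kay-style per-strand diffuse term. The per-strand path needs a dot product per strand per light, which is what blows up at tens of thousands of strands:

```python
# Why per-strand lighting is expensive: flat shading does one lighting
# value for the whole head of hair; per-strand does one per strand
# (per light). Strand geometry here is random stand-in data.
import numpy as np

rng = np.random.default_rng(1)

n_strands = 50_000
light_dir = np.array([0.0, 1.0, 0.3])
light_dir /= np.linalg.norm(light_dir)

tangents = rng.normal(size=(n_strands, 3))          # fake strand tangents
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

# Cheap path: one uniform diffuse value shared by every strand.
flat = np.full(n_strands, 0.6)

# Expensive path: Kajiya-Kay diffuse = sin(angle between tangent and
# light) = sqrt(1 - (T.L)^2), computed per strand.
dot = tangents @ light_dir
per_strand = np.sqrt(np.clip(1.0 - dot**2, 0.0, 1.0))

print(f"flat:       mean={flat.mean():.2f} std={flat.std():.2f}")
print(f"per-strand: mean={per_strand.mean():.2f} std={per_strand.std():.2f}")
# The per-strand version has real variance: lit strands over dark ones.
```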

The only thing DLSS 5 is changing, in the literal technical sense, is the lighting. It's just that lighting can have dramatic effects on how the eye perceives what's lit.

And yes, the hair looks very different, but that's how hair actually looks in mixed light and shadow (though a fair complaint with DLSS 5 is that it looks like it's sliding the contrast unnaturally high).

kromem@lemmy.world 1 point 1 month ago (4 children)

It's not an 'LLM' (large language model). 🤦

kromem@lemmy.world 5 points 1 month ago (3 children)

Eventually, maybe, but I really doubt devs are going to build their entire game in an unfinished state for the less than 1% of their audience that will have one of the cards that can run this.

PS5 and Xbox players, and all PC gamers not dropping $1k on a new rig this fall, are still going to be playing these games without it.

In three years, sure: maybe the PS6 has similar features on AMD by then, and the market share of cards running real-time ML adjustments to scenes has widened enough that devs can depend on the tech.

But it's a bit premature to throw a fit about the likelihood of devs cutting corners because of a feature only accessible to the most expensive setups owned by a fraction of their target audience.

kromem@lemmy.world 15 points 1 month ago (10 children)

Important details from a post-demo writeup:

During the demo, the DLSS research team talked through the level of granularity available. Developers don't just get an on/off switch. They get intensity controls that can be dialed anywhere, not just full strength. They get spatial masking, so they can set the water enhancement to 100%, wood to 30%, and characters to 120%, all independently within the same scene. They get color grading controls for blending, contrast, saturation, and gamma. All of this runs through the existing SDK, which means studios already using DLSS and Reflex have a familiar pipeline to work with.
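
To picture what that could look like in a studio's hands, here's a hypothetical sketch of the knobs described above. None of these names are the real DLSS SDK API (which hasn't been published); they just mirror the controls from the writeup:

```python
# Hypothetical config sketch: NOT the real DLSS SDK API, just the
# controls described in the writeup (per-material intensity masks
# plus global color grading).
from dataclasses import dataclass, field

@dataclass
class NeuralRenderConfig:
    # Per-material intensity: 1.0 = the demo's "100%" full strength;
    # values above 1.0 (characters at 120%) are allowed.
    material_intensity: dict[str, float] = field(default_factory=dict)
    blend: float = 1.0     # how strongly the output mixes over the base frame
    contrast: float = 1.0
    saturation: float = 1.0
    gamma: float = 2.2

    def intensity_for(self, material: str) -> float:
        # Unlisted materials fall back to "off" rather than full strength.
        return self.material_intensity.get(material, 0.0)

# The scene mix from the writeup: water 100%, wood 30%, characters 120%.
cfg = NeuralRenderConfig(
    material_intensity={"water": 1.0, "wood": 0.3, "character": 1.2},
)

for mat in ("water", "wood", "character", "skybox"):
    print(f"{mat:9} -> intensity {cfg.intensity_for(mat):.2f}")
```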

A demo showing the tech running at 100% is not going to look the same as the full games that will be built with it over the next year before release.

Another thing to keep in mind is that the only thing it's changing is the lighting effects. The models aren't changing at all (even when this looks hard to believe).

Yes, at full strength the effect at times looks pretty bad (anyone remember when devs could suddenly use bloom effects and entire games looked like Vaseline was smeared across the screen?). But it's not going to be flipped on at 100% across the board for most games.

My guess, looking at the demos so far, is that a lot of material lighting (stone, metal, etc.) will have it at higher strengths, while characters, particularly faces/skin, will have it considerably lower (the key place where it gets especially uncanny valley).

kromem@lemmy.world 17 points 1 month ago (6 children)

Who do you think is going to be drafted? You think the DOGE data grab, plus the requests for state voter registration rolls, isn't going to be used to filter who gets drafted to the front lines toward exactly the people they want out of the country?

How do you get US citizens out of the country if you can't legally deport them?

If they've been doing illegal shit the whole time with profiling, do you really think they aren't going to also profile in how they conduct a draft?

kromem@lemmy.world 10 points 2 months ago

I wonder how much of this is related to the posturing from the new lead of Xbox about returning to exclusivity over there.

We were so close to one of the dumbest things in gaming finally going away after decades.

(Also, nothing Sony does from here on out will surprise me in its stupidity after they shuttered Bluepoint.)


I often see a lot of people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

7 points · submitted 2 years ago* (last edited 2 years ago) by kromem@lemmy.world to c/technology@lemmy.world

I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and are just commenting based on the headline, here's a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
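
For what it's worth, the combinatorial core of their argument is easy to sanity-check yourself. A back-of-envelope sketch (my numbers, not the paper's exact figures):

```python
# Back-of-envelope version of the skill-composition argument: with n
# atomic skills there are C(n, k) distinct k-skill combinations, which
# quickly dwarfs what any training corpus could exhibit verbatim.
from math import comb

n_skills = 1000   # hypothetical count of atomic language skills
for k in (2, 3, 4):
    print(f"{k}-skill combinations: {comb(n_skills, k):,}")
# 4-skill combinations already top 41 billion, so competence on randomly
# sampled skill tuples can't be pure memorization of training examples.
```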
