new_name_who_dis_

joined 1 year ago
[–] new_name_who_dis_@alien.top 1 points 11 months ago

What does prompt engineering have to do with SEO? I know very little about SEO, but didn't tricks to improve SEO by way of changing the text of your website become outdated in like the early 2000s?

[–] new_name_who_dis_@alien.top 1 points 11 months ago

That's kind of what my original comment was all about.

[–] new_name_who_dis_@alien.top 1 points 11 months ago (2 children)

Like what?

I mean there are always sketchy papers because of p-hacking. But I doubt there are papers that don't have a proper evaluation at all.

[–] new_name_who_dis_@alien.top 1 points 11 months ago (4 children)

Well it depends on what you are building. If you are actually doing ML research, i.e. you want to publish papers, people are doing evaluation, and you won't get published without it. There are a bunch of tricks that have been used to evaluate generative models that you can find in these papers. I remember in grad school our TA made us read a paper, and then in the discussion he said that he thought the method they proposed was not good at all; he wanted us to read it to learn about their evaluation metric, which he deemed "very clever".

[–] new_name_who_dis_@alien.top 1 points 11 months ago

It's basically impossible to be completely caught up. So don't feel bad. I am not really sure it's all that useful either, you should know of technologies / techniques / architectures and what they are used for. You don't need to know the details of how they work or how to implement them from scratch. Just being aware means you know what to research when the appropriate problem comes your way.

Also a lot of the newest stuff is just hype and won't stick. If you've been in ML research since 2017 (when transformers came out) you should know that. How many different CNN architectures came out between ResNet in 2015 and now? And still most people simply use ResNet.

[–] new_name_who_dis_@alien.top 1 points 11 months ago

The money always wins...

[–] new_name_who_dis_@alien.top 1 points 11 months ago (3 children)

Lol I always wanted to join OpenAI to work under Sutskever. When Altman joined in 2019, my first thought was: why is the YC guy running arguably the most important AI lab in the world?

He's probably a talent magnet, but he's not a talent magnet for ML researchers. For tech people, probably yes, but more of the startup-engineer types who are proud of not going to school, not the academic researcher types who spent 10 years at university doing PhDs and postdocs in ML.

[–] new_name_who_dis_@alien.top 1 points 11 months ago (2 children)

I mean it's obviously a power play. But the characters involved are this:

  • Altman: entrepreneur extraordinaire, head of YC, CEO, investor, etc.

  • Brockman: entrepreneur extraordinaire, former CTO of Stripe (e-commerce infra company), investor, etc.

  • Nadella: CEO of Microsoft, nuff said.

  • Sutskever: researcher extraordinaire, academic

  • McCauley: RAND Corp scientist (no idea what that means but it has scientist in the name)

  • Toner: Georgetown academic.

All of the tech entrepreneur people and investors -- the people who are obsessed with just making money -- are on one side. And all of the academic, science people are on the other side. Recall that OpenAI was founded by a bunch of researchers and academics who explicitly made it a nonprofit, which Altman changed once he became CEO in 2019.

Idk if the academics really care about the betterment of mankind, but I know for a fact that the other guys are driven by pure greed.

[–] new_name_who_dis_@alien.top 1 points 11 months ago (5 children)

Neither Sam nor Greg has an ML background, so idk why they are being hired for what is essentially a chief scientist role in this new lab. Altman basically has the skillset of a CEO: he's good at attracting talent, marketing himself and his company, and raising money. Not the skillset of a research lead. They'll probably stay just long enough to market it a bit and then move on to the next thing.

[–] new_name_who_dis_@alien.top 1 points 11 months ago

Hinton's paper was famous not because he claimed to invent backprop but because (iirc) it was the first instance of it being used to optimize neural nets.

Like the transformer paper is famous, but it didn't invent attention; it just applied it in a novel way.

[–] new_name_who_dis_@alien.top 1 points 11 months ago (1 children)

I’m not good at this myself (or maybe I’m just lazy) but my friend who’s better at this than me told me that your resume should have all of the buzzwords used in the job posting. That’s how you get through the filters etc.
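The buzzword-matching idea can be sketched as a quick script. This is a rough illustration of how a naive keyword filter might work, not how any real ATS is implemented; the sample posting and resume text are made up.

```python
# Rough sketch: find which buzzwords from a job posting never appear
# in a resume. The texts below are invented examples, not real data.
import re


def missing_buzzwords(posting: str, resume: str) -> set[str]:
    """Return words (4+ letters) from the posting that the resume lacks."""
    def tokenize(text: str) -> set[str]:
        # Lowercase and grab alphabetic words of 4+ letters,
        # skipping short filler words like "for" and "and".
        return set(re.findall(r"[a-z]{4,}", text.lower()))

    return tokenize(posting) - tokenize(resume)


posting = "Seeking pytorch experience, transformers, and mlops"
resume = "Built transformer models in pytorch"
print(sorted(missing_buzzwords(posting, resume)))
```

Note that a naive exact-match filter like this misses "transformers" even though the resume says "transformer", which is exactly why the advice is to echo the posting's wording verbatim.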

[–] new_name_who_dis_@alien.top 1 points 1 year ago

That's not really an ML problem; it's a search problem. You'd need an index (or at least a DB) of all e-commerce products. Then you could potentially use ML to find competitors, but I feel like you don't need ML for this: a simple if statement checking that the two products are of the same type will suffice.
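The "simple if statement" could look something like this. The `Product` record and its `category` field are hypothetical; the sketch assumes the catalog index already stores a type/category per product.

```python
# Minimal sketch: treat two catalog entries as competitors when they
# share a category. Product and its fields are made-up for illustration.
from dataclasses import dataclass


@dataclass
class Product:
    name: str
    category: str  # e.g. "running shoes", assumed to come from the index


def are_competitors(a: Product, b: Product) -> bool:
    # The whole "ML" step collapses to this comparison once the
    # index already labels each product with a type.
    return a.category == b.category and a.name != b.name


p1 = Product("Acme Runner", "running shoes")
p2 = Product("Zoom Dash", "running shoes")
p3 = Product("Acme Kettle", "kitchenware")
print(are_competitors(p1, p2), are_competitors(p1, p3))
```

The hard part is building and normalizing the index (so "sneakers" and "running shoes" map to one category), not the comparison itself.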
