stsquad

joined 2 years ago
[–] stsquad@lemmy.ml 8 points 2 days ago

I personally have email integrated into my editor (mu4e) so I can apply patches and search code directly from the email thread. It handles threads and searching really well.
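For the curious, the apply step is nothing exotic: save the message from mu4e as an mbox file and let `git am` do the rest. A rough self-contained sketch (the repo below is fabricated just to stand in for a real mailing-list patch):

```shell
#!/bin/sh
# Minimal sketch of the "apply a patch from email" workflow.
# The setup fakes a patch email by exporting a commit as an mbox,
# as if it had arrived on a mailing list and been saved from mu4e.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m base

# Fake the incoming patch: make a commit, export it in mbox form.
echo "hello" > demo.txt
git add demo.txt
git -c user.email=me@example.com -c user.name=me \
    commit -q -m "add demo.txt"
git format-patch -1 --stdout > patch.mbox

# Rewind, then apply the mbox exactly as you would one saved from mu4e.
git reset -q --hard HEAD~1
git -c user.email=me@example.com -c user.name=me am -q patch.mbox
git log --oneline -1
```

The same `git am` call works on any saved message containing an inline patch, which is why keeping mail inside the editor makes the thread-to-tree round trip so short.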

[–] stsquad@lemmy.ml 10 points 2 days ago (1 children)

Issue triage, code exploration, extracting information from disparate sources, first pass code review. There are loads of use cases where it's potentially useful.

For me it's a lot better at extracting the requirements for a CPU feature from a 10,000 page architecture reference manual than I am.

[–] stsquad@lemmy.ml 0 points 3 days ago

I have API access at work because I don't want to be tied to a UI. I'm very aware of the cost because I'm trying to see where it offers good value for money.

Of course things like the deep research and notebooklm are covered by the Google Workspace fees, which, while including more than the personal plans, are also a fair bit more expensive.

[–] stsquad@lemmy.ml -2 points 4 days ago* (last edited 4 days ago) (1 children)

You're making a big assumption extrapolating from one particular study involving Java code and a static analyser.

[–] stsquad@lemmy.ml 4 points 4 days ago (1 children)

How is that patch sloppy?

I feel the term slop is being overused to cover anything an LLM has touched. If I ask an agent to re-read a mail thread for me and apply the changes to my tree for review, is that slop? Would you feel better about it if I copied and pasted from the email into the code in my editor?

I've just been doing a bunch of bug triage which was mostly driven by the agent, although I checked the issues where it had commented. Was that slop? Ironically a lot of the issues were AI-generated, although for the most part more complete than a lot of the purely human submissions we get. Are those bug reports slop? What about the poorly drafted human ones?

[–] stsquad@lemmy.ml 23 points 4 days ago (12 children)

That's not kernel policy but LF guidance. From the kernel's point of view patches still have a high bar to pass to get merged and I don't think we have enough data yet to see if LLM based submissions to the kernel have a higher or lower error rate than humans.

I certainly feel the uptick in LLM reports though - one of the projects I'm working on is seeing a deluge of them at the moment.

[–] stsquad@lemmy.ml 8 points 1 week ago (1 children)

I've vibed a bunch of apps and scripts and it's great for that project you never found time for. Importantly they were all local and ultimately throwaway things.

The idea of relying on vibes for production seems insane to me. The most important thing about software engineers is not how fast they can type.

[–] stsquad@lemmy.ml 10 points 1 week ago (1 children)

At 43 that's probably a little earlier than the OP expected and if their daughter wasn't planning on starting that early it's going to affect school and job prospects.

That's not to say it can't work. One of my in-laws had their first at 18 and now, as their last leaves for uni, they are still fit and young enough to enjoy the empty nest experience.

[–] stsquad@lemmy.ml 2 points 1 week ago (1 children)

Where you live maybe. The NHS is centrally funded through taxation.

[–] stsquad@lemmy.ml 8 points 1 week ago (5 children)

Of course you do - if the cost of treating the patient down the line is going to cost you more. Public health systems have a vested interest in healthier citizens.

[–] stsquad@lemmy.ml 3 points 2 weeks ago (1 children)

The majority of my gaming is on the road too but I've found the Steam Deck hits that niche for me. I carry a thin Chromebook for work related things. Admittedly you don't need as powerful a GPU for a small 720p display.

[–] stsquad@lemmy.ml 2 points 2 weeks ago (4 children)

How big a niche is that? Because when I think of high-end gaming, a laptop has all sorts of trade-offs to make anyway.

9
submitted 7 months ago* (last edited 7 months ago) by stsquad@lemmy.ml to c/videos@lemmy.world
 

A fairly deep dive about how you can cherry pick stats to push a narrative.

 

For virtualization there are improvements for VirtIO, VFIO and LoongArch CPU hotplug. On the emulation side there are additions for Arm and RISC-V, and even some speed-ups for x86 string ops. On the documentation side a whole bunch of work has been done on the QMP API to make it clearer and more navigable.

 

I was trying to add a Matter device from my phone but it kept saying I needed to install the companion app from the Play store even though I was in the companion app (from f-droid). I've installed the Bluetooth proxy app as well but it made no difference.

Does anyone know what's going on?

 

It always seemed to me that QAnon was some sort of online LARP on 4chan that got out of control and metastasized. It's left a trail of broken families and swept into the mainstream with branding and everything. After the predictions of Trump's return to power after Jan 6th it seems to have fizzled out. Did QAnon stop posting? Did their adherents just glom onto the next crazy theory? How many followers now disavow the theories of QAnon?

 