coolin

joined 1 year ago
[–] coolin@beehaw.org 1 points 10 months ago (1 children)

I suppose, having worked with LLMs a whole bunch over the past year, I have a better sense of what I meant by "automate high-level tasks".

I'm talking about an assistant where, let's say, you need to edit a podcast video to add graphics and cut out dead space or mistakes that you corrected in the recording. You could tell the assistant to do that, and it would open the video in Adobe Premiere Pro, do the necessary work, then ask you to review it to check whether it made mistakes.

Or if you had an issue with a particular device, e.g. your display, the assistant would research it and perform the necessary steps to troubleshoot and fix it.

These are hypothetical scenarios for now, but the current GPT-4 can already perform some of these tasks, and specifically training a model to be a desktop assistant that handles more agentic tasks could make this a reality within a few years.
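
As a toy sketch of what that kind of agentic loop could look like (every "tool" here is a hypothetical stand-in, not a real Premiere Pro API):

```python
# Toy sketch of an agent loop: the LLM picks an action, a hypothetical
# tool applies it, and the final result goes back to the human for review.
# The tool functions and model name are placeholders, not real APIs.
from openai import OpenAI

client = OpenAI()

TOOLS = {
    "cut_dead_space": lambda video: f"trimmed({video})",      # hypothetical editor hook
    "add_graphics": lambda video: f"with_graphics({video})",  # hypothetical editor hook
}

def run_agent(task: str, video: str) -> str:
    for _ in range(5):  # cap the number of steps
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Pick exactly one tool from {list(TOOLS)} or say DONE."},
                {"role": "user", "content": f"Task: {task}. Current file: {video}"},
            ],
        )
        choice = resp.choices[0].message.content.strip()
        if choice == "DONE" or choice not in TOOLS:
            break
        video = TOOLS[choice](video)  # apply the chosen edit
    return video  # hand back to the human to review for mistakes

print(run_agent("remove dead space and add lower-thirds", "podcast.mp4"))
```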

It's also already useful for reading and editing long documents, and it will only get better on that front. You can already use an LLM to query your documents and get summaries, or feed them in as instructions/research to aid in performing a task.
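
As a rough sketch of that document-querying workflow (the model name and the naive chunking are placeholder assumptions, not any particular product):

```python
# Minimal sketch: query a long document with an LLM in two passes.
# Assumes the `openai` package and an API key in OPENAI_API_KEY;
# the fixed-size chunking is naive (real pipelines use retrieval).
from openai import OpenAI

client = OpenAI()

def query_document(document: str, question: str, chunk_size: int = 8000) -> str:
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    notes = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Extract only what is relevant to the question."},
                {"role": "user",
                 "content": f"Question: {question}\n\nText:\n{chunk}"},
            ],
        )
        notes.append(resp.choices[0].message.content)
    # Second pass: combine the per-chunk notes into one answer.
    final = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Question: {question}\n\nNotes:\n" + "\n".join(notes)}],
    )
    return final.choices[0].message.content
```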

[–] coolin@beehaw.org 5 points 10 months ago (5 children)

Current LLMs are manifestly different from Cortana (🤢) because they are actually somewhat intelligent. Microsoft's Copilot can do web searches and perform basic tasks on the computer, and because of their exclusive contract with OpenAI they're gonna have access to more advanced versions of GPT that can do more high-level control and automation on the desktop. It will 100% be useful for users to have this available, and I expect even Linux desktops will eventually add local LLM support (once consumer compute and the tech mature). It is not just glorified autocomplete; its outputs correlate fairly well with real human language cognition.
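
For a taste of what that local LLM support looks like today, here's a minimal sketch using Hugging Face transformers; "gpt2" is just a small stand-in for whatever model a desktop would actually ship:

```python
# Minimal sketch of fully local LLM inference with Hugging Face
# transformers; no data leaves the machine. The model name is a
# small stand-in, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("A local assistant on the Linux desktop could",
                max_new_tokens=40)
print(out[0]["generated_text"])
```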

The main issue for me is that they take all the data you input and mine it for better models without your explicit consent. This isn't an area where open source can catch up without significant capital behind it, so we have to hope Meta, Mistral, and government-funded projects give us what we need to have a competitor.

[–] coolin@beehaw.org 0 points 10 months ago

Yeah, I think Nix is a good concept, but I feel like 99% of the config work could be managed by the OS itself, with a GUI to change everything else. I also feel like flakes should be the default instead of the weird multiple-systems setup they have now. And I wish most apps had a sandbox built in, because Nix apps would then rival Flatpak and, if ported to Windows, could become a universal package manager. Overall a good concept, but not there yet.

[–] coolin@beehaw.org 48 points 1 year ago (8 children)

"I use Signal to hide my data from the US government and big tech"

"Wait, you seriously still use Reddit? Everyone switched to the Fediverse!"

"Wow, can't believe you use Apple! Android is so much better."

No one who isn't terminally online understands what these statements mean. If you want people to use something else, don't make it about privacy; pick something with fancy buttons and cool features that looks close enough to what they already have. They do not care about privacy and are literally of the mindset "if I have nothing to hide, I have nothing to fear". They sleep well at night.

[–] coolin@beehaw.org 1 points 1 year ago

Hello, kids! Pirates are very bad! Never use qBittorrent to download copyrighted material, and certainly do NOT connect it to a VPN to avoid getting caught. And you should NEVER download illegal material over an HTTPS connection, because it is fully encrypted and you won't get caught!

[–] coolin@beehaw.org 6 points 1 year ago

This is another reminder that the standard model value for the anomalous magnetic moment of the muon was recalculated by two different groups using higher-precision lattice QCD techniques and was found to be consistent with the Brookhaven/Fermilab measurement, undercutting the famous "discrepancy". More work needs to be done to check for errors in both the original and the newer calculations, but it seems quite likely to me that this will ultimately confirm the standard model exactly as we know it, without providing any new insight or evidence for another force particle.
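
For reference, here's a quick sketch of the numbers in play, with the anomaly defined as a_mu = (g - 2)/2; these are approximate published values circa 2021, added here for context:

```latex
% Anomalous magnetic moment of the muon, a_\mu = (g-2)/2.
% Approximate values as reported around 2021:
\begin{align*}
a_\mu^{\mathrm{exp}} &= 116\,592\,061(41) \times 10^{-11} && \text{(BNL + Fermilab world average)} \\
a_\mu^{\mathrm{SM,\ data\text{-}driven}} &= 116\,591\,810(43) \times 10^{-11} && (\approx 4.2\sigma \text{ below experiment}) \\
a_\mu^{\mathrm{SM,\ lattice}} &= 116\,591\,954(55) \times 10^{-11} && \text{(BMW; much closer to experiment)}
\end{align*}
```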

My hunch is that unknown particles like dark matter are explained by a relatively simple extension of the standard model (e.g. supersymmetry, axions, etc.), and that the new physics out there that unifies gravity and QM is something completely different from what we are currently working on, something that can't be observed with current colliders or any other experiments on Earth.

So we'll probably keep finding nothing interesting for quite some time, until we can get a large ML model crunching through every possible model to check its fit against the data, and hopefully derive some better insight from there.

Though I'm not an expert and I'm talking out of my ass, so take this all with a grain of salt.

[–] coolin@beehaw.org 2 points 1 year ago (1 children)

Yeah, there's no way a viable Linux phone could be made without the ability to run Android apps.

I think we're probably at least a few years away from being able to daily-drive Linux on modern phones, with things like NFC payments working and a decent native app collection. It's definitely coming, but it has far less momentum than even the Linux desktop.

[–] coolin@beehaw.org 2 points 1 year ago

Smh my head, Linux is too mainstream now!!! How will I be a cool hacker boy away from society if everyone else uses it!!!!!!!

[–] coolin@beehaw.org 4 points 1 year ago (1 children)

For the love of God, please stop posting the same story about AI model collapse. This paper has been out since May, has been discussed multiple times, and the scenario it presents is highly unrealistic.

Training on the whole internet is known to produce shit model output, so humans have to curate their own high-quality datasets to feed to these models to yield high-quality results. That is why we have techniques like fine-tuning, LoRAs, and RLHF, as well as countless curated datasets to feed to models.

Yes, if a model were for some reason trained on raw internet output for several iterations, it would collapse and produce garbage. But the current frontier approach to datasets is for strong LLMs (e.g. GPT-4) to produce high-quality datasets and for new LLMs to train on those. This has been shown to work with Phi-1 (really good at writing Python code, trained on textbook-quality content generated with GPT-3.5) and Orca/OpenOrca (a GPT-3.5-level model trained on millions of examples from GPT-4 and GPT-3.5). Additionally, GPT-4 itself has likely been trained on synthetic data, and future iterations will train on more and more.

Notably, by selecting a narrow, high-quality slice of a model's outputs instead of the whole distribution, we can avoid model collapse and in fact produce even better outputs.
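
As a rough sketch of that select-the-best-outputs idea (the teacher model name and the quality heuristic below are placeholders; real pipelines use reward models or test harnesses):

```python
# Minimal sketch of curating synthetic training data: sample outputs from
# a strong "teacher" model, keep only the top-scoring slice, and emit a
# fine-tuning set. quality_score() is a hypothetical placeholder heuristic.
import json
from openai import OpenAI

client = OpenAI()

def quality_score(text: str) -> float:
    # Placeholder heuristic: prefer longer, less repetitive answers.
    words = text.split()
    return len(set(words)) / max(len(words), 1) * min(len(words), 200)

def build_dataset(prompts: list[str], keep_fraction: float = 0.2) -> list[dict]:
    samples = []
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4",  # teacher model (placeholder name)
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        samples.append({"prompt": prompt, "answer": answer,
                        "score": quality_score(answer)})
    # Keep only the top-scoring slice: the "narrow range of outputs".
    samples.sort(key=lambda s: s["score"], reverse=True)
    kept = samples[: max(1, int(len(samples) * keep_fraction))]
    return [{"prompt": s["prompt"], "completion": s["answer"]} for s in kept]

if __name__ == "__main__":
    data = build_dataset(["Explain list comprehensions in Python."])
    print(json.dumps(data, indent=2))
```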

[–] coolin@beehaw.org 4 points 1 year ago

I've never used Manjaro, but the perception I get is that it's a noob-friendly distro with good GUI and config tools, which then catastrophically fails when you monkey around with updates and the AUR. That's a pain for technical users and a back-to-Windows experience for the people it's targeted at. Overall, significantly worse than EndeavourOS or plain ol' vanilla Arch Linux.

[–] coolin@beehaw.org 17 points 1 year ago* (last edited 1 year ago)

"We Have No Moat, And Neither Does OpenAI" is the leaked document you're talking about.

It's a pretty interesting read. Time will tell if it's right, but given the speed of advancements I'm seeing stacked on top of each other in the open source community, I think it could be. If open source figures out scalable distributed training, I think it's Joever for the AI companies.
