bamboo

joined 1 year ago
[–] bamboo@lemm.ee 1 points 5 months ago

The Mach-E, already a large car, is nearly a third heavier than the Bolt, and it’s bigger than the Bolt in every physical dimension, even if not by much (except for length, where there’s nearly a two-foot difference). I just want a small compact car with enough range to get me to work and back and run a few errands. In 2000 most cars were reasonable sizes even in the US, but today you can’t find anything reasonably sized new. I don’t want an SUV or a “crossover”. In other countries like China these vehicles are being built, but US politicians would rather protect the profits of car companies producing massive, inefficient, unsustainable monster trucks for people to take to the office and back.

[–] bamboo@lemm.ee 1 points 5 months ago (7 children)

The only EV I can find from an American brand that is in any way appealing is the Bolt. Everything else is a giant truck or SUV, and to be honest I don’t feel safe driving such a huge piece of metal, and I don’t have the money to justify buying one. No American options are affordable or reasonably sized. The US is doing EVs in just about the most unsustainable way possible.

[–] bamboo@lemm.ee 55 points 5 months ago (54 children)

Supposedly they want us all in EVs, but American manufacturers aren’t producing shit except for Tesla, whose cars are safety hazards, and they effectively banned the Chinese competition that could have actually accomplished it. US car manufacturers will likely dodge these new standards by pushing more “light trucks” that are exempt.

[–] bamboo@lemm.ee 0 points 5 months ago

ChatGPT isn’t gonna replace software engineers anytime soon. It can increase productivity though, and that’s the value LLMs provide. If someone made a shitty pull request filled with obvious ChatGPT output, that’s on them and not the technology. Blaming ChatGPT for a programmer’s bad code is like blaming the autocomplete in their editor: just because the editor suggests something doesn’t mean you have to accept it if it’s wrong.

[–] bamboo@lemm.ee -1 points 5 months ago (1 children)

OpenAI is a non-profit. Further, US tech companies usually take many years to become profitable. It’s called reinvesting revenue; more companies should be doing that instead of stock buybacks.

Let’s suppose hosted LLMs like ChatGPT aren’t financially sustainable and go bust, though. As a user, you can also just run them locally, and as smaller models improve, this is becoming more and more popular. It’s likely how Apple will be integrating LLMs into their devices, at least in part, and Microsoft is going that route with “Copilot+ PCs” that start shipping next week.

Integration aside, you can run 70B models today on an overpriced $5k MacBook Pro that are maybe half as useful as ChatGPT. The cost to do so exceeds a ChatGPT subscription, but to use my numbers from before, a $5k MacBook Pro running Llama 3 70B would only have to save an engineer one hour per week to pay for itself in the first year. In subsequent years only the electricity costs would matter, which for a current-gen MacBook Pro would be about equivalent to the ChatGPT subscription in expensive energy markets like Europe, or half that or less in the US.

In short: you can buy overpriced Apple hardware to run your LLMs, pay high energy prices, and it’s still so cheap compared to a single engineer that saving one hour per week would pay for it in the first year.
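The hardware break-even claim above is straightforward arithmetic; here’s a minimal sketch, assuming the low-end $200k fully loaded engineer cost and a standard 2,080-hour work year:

```python
# Break-even estimate for a $5k MacBook Pro running a local 70B model,
# using the numbers from the comments above (2080 work hours/year assumed).
HARDWARE_COST = 5_000
ENGINEER_COST_PER_YEAR = 200_000   # low end, fully loaded
WORK_HOURS_PER_YEAR = 2_080        # 40 h/week * 52 weeks

hourly_cost = ENGINEER_COST_PER_YEAR / WORK_HOURS_PER_YEAR  # ~$96/hour
hours_to_break_even = HARDWARE_COST / hourly_cost           # 52 hours
hours_per_week = hours_to_break_even / 52                   # 1.0 h/week

print(f"break-even: {hours_to_break_even:.0f} hours, "
      f"or {hours_per_week:.1f} h/week over a year")
```

So the "one hour per week" figure falls out exactly at the $200k cost level; a more expensive engineer breaks even even faster.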

[–] bamboo@lemm.ee -1 points 5 months ago (5 children)

It can be quite profitable. A ChatGPT subscription is $20/month right now, or $240/year. A software engineer in the US costs between $200k and $1m per year with all benefits and support costs considered. If that $200k engineer can use ChatGPT to save just 2.5 hours in a year, it pays for itself.
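The 2.5-hour figure comes from a quick rate calculation; a sketch, assuming a 2,080-hour work year:

```python
# Break-even estimate for a ChatGPT subscription
# (numbers from the comment above; 2080 work hours/year assumed).
SUBSCRIPTION_PER_YEAR = 20 * 12      # $20/month -> $240/year
ENGINEER_COST_PER_YEAR = 200_000     # low end, fully loaded
WORK_HOURS_PER_YEAR = 2_080

hourly_cost = ENGINEER_COST_PER_YEAR / WORK_HOURS_PER_YEAR  # ~$96.15/hour
breakeven_hours = SUBSCRIPTION_PER_YEAR / hourly_cost       # ~2.5 hours/year

print(f"engineer hourly cost: ${hourly_cost:.2f}")
print(f"subscription pays for itself after {breakeven_hours:.1f} h/year saved")
```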

[–] bamboo@lemm.ee 22 points 5 months ago (10 children)

I don’t think generative AI is going anywhere anytime soon. The hype will eventually die down, but it’s already proved its usefulness in many tasks.

[–] bamboo@lemm.ee 39 points 5 months ago (5 children)

I wouldn’t be so quick to write this off. The software that is overwhelmingly dominant here is RealPage, which is allowing landlords to act as a cartel to set prices. This isn’t just about taking down RealPage; it’s also potentially about setting a precedent against using software to obscure cartel price-fixing behavior.

[–] bamboo@lemm.ee 1 points 5 months ago (1 children)

This is a false dichotomy. While it’s true that the president cannot massively reform immigration policy or adjust the budget unilaterally, they do have a lot of sway in how to handle the situation by setting the priorities of federal agencies and working with aligned NGOs. Instead of restricting immigration, why not work with cooperative state and local governments, as well as NGOs, to make resources more available to those who need them? He could also direct agencies like CBP and ICE to stop wasting their money detaining people and ruining lives, and instead focus on helping those not in compliance go through the legal process, using the agencies’ existing logistical resources to provide personnel support to those crossing the border.

Just limiting immigration altogether is the lazy approach and helps no one.

[–] bamboo@lemm.ee 4 points 5 months ago (1 children)

This is going to cause a bunch of needless pain, suffering, and death. And the people Biden thinks he’ll win over with this policy won’t vote for him no matter what.

[–] bamboo@lemm.ee 1 points 5 months ago

If something like that were to work, a lot of effort would need to be put into minimizing UI friction. I could see something like this: uploaders add topic tags to their videos, and an AI runs in the background to generate and apply new tags based on the content (most people would not understand how to properly tag it). An AI would also be used to build a graph of related tags, where similar or closely related tags are nodes joined by an edge.

Then, on first login, the user is prompted to pick some tags to start with. Over time, the client uses the adjacent-tag graph to fine-tune the user’s tags, on device. The idea is that we could get a decent algorithm that recommends new stuff based on what the user watches, while keeping the processing of user-specific data local.

The client would also have an option the user could enable to contribute their client’s tag information back to the global tag graph, improving it for everybody. This data could be combined with other users’ data at the instance level to somewhat anonymize it, assuming it’s a large multi-user instance. If you host a single-user instance, you’d probably not want to contribute to the global tag graph unless you’re OK with your tag preferences being public.
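A minimal sketch of the tag-graph idea: tags that co-occur on a video get a weighted edge, and recommendations come from walking edges out of the user’s current tags. All names and the example tags here are hypothetical, and a real system would weight by watch history rather than a flat tag set:

```python
from collections import defaultdict

def build_tag_graph(videos):
    """videos: iterable of tag sets. Returns {tag: {neighbor: co-occurrence count}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for tags in videos:
        tags = list(tags)
        for i, a in enumerate(tags):
            for b in tags[i + 1:]:
                # Undirected edge, weighted by how often the pair co-occurs.
                graph[a][b] += 1
                graph[b][a] += 1
    return graph

def recommend(graph, user_tags, k=3):
    """Suggest up to k tags adjacent to the user's tags (runs on device)."""
    scores = defaultdict(int)
    for tag in user_tags:
        for neighbor, weight in graph.get(tag, {}).items():
            if neighbor not in user_tags:
                scores[neighbor] += weight
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical instance data: each video's tag set.
videos = [
    {"woodworking", "diy", "tools"},
    {"woodworking", "furniture"},
    {"diy", "tools", "electronics"},
]
graph = build_tag_graph(videos)
suggestions = recommend(graph, {"woodworking"})
```

Contributing back to the global graph would then just mean shipping (anonymized, instance-aggregated) edge-weight increments upstream, never the user’s own tag list.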

[–] bamboo@lemm.ee 3 points 5 months ago

It’s a bit tricky but I think a privacy preserving algorithm is possible. Simply put, the more data available, the better an algorithm can be.
