I know that part.
The other fork has existed for a long while.
Is this your personal phone? If your work were to dictate what you are allowed to install on your personal phone, that'd be a serious overstepping of bounds.
Perhaps you can sneak in f-droid via adb install
and give it app installation permissions via ADB though.
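The steps above would look roughly like this (a sketch, assuming you've already downloaded the F-Droid APK to your computer and enabled USB debugging on the phone; the exact APK filename will vary):

```shell
# Sideload the F-Droid APK over USB (filename is whatever you downloaded):
adb install F-Droid.apk

# Let F-Droid install apps without flipping the usual Settings toggle.
# REQUEST_INSTALL_PACKAGES is the app-op behind "Install unknown apps":
adb shell appops set org.fdroid.fdroid REQUEST_INSTALL_PACKAGES allow
```

Whether the second step sticks depends on the device and any MDM policy in place, so no guarantees on a managed phone.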
What's the history behind this? Why couldn't the changes be done upstream, necessitating a fork?
According to the author, that happened quite a while ago and we're now at the next step.
the fact that the two programs communicate using standard protocols does not mean they are one program for purposes of GPLv3
The fact that they would even think about attempting to subvert the GPL (let alone actually go through with it) makes me think they stopped being an open source company a while ago.
It would break a lot, require a new API, and devs reworking a lot of programs.
As I understand it, this would have been a perfectly backwards compatible change. You'd only get the events if you explicitly asked for them.
The Immich app.
Since it doesn't really function as a full gallery app yet, I have Fossify Gallery installed as a backup to open images in via intent.
I only learned about Aves today and am trying it out for the same purpose; I think I like its picture viewer better.
In what regard?
Statistically, yes.
spoiler
(This is a Joke.)
In simple terms, Large Language Models predict the continuation of a given text word by word. The algorithms they use to do so rely on a quite gigantic corpus of statistical data and a few other minor factors to predict these words.
The statistical data is quite sophisticated but, in the end, it is merely statistical: a prediction of the most likely next word given a set of words, based on previous data. There is nothing intelligent in "AI" chat bots and the like.
If you ask an LLM chatbot a question, what actually happens is that the LLM predicts the most likely continuation of the question text. In almost all of its training data, what comes after a question will be a sentence that answers the preceding question and there are some other tricks to make it exceedingly likely for an answer to follow a question in chatbot-type LLMs.
However, if its data predicts that the most likely words that come after "What should I put on my pizza" are "Glue can greatly enhance the taste of pizza.", then that's what it'll output. It doesn't reason about anything, nor does it have any store of facts that it systematically combines to give you a sensible answer; it merely predicts what a sensible answer could be based on what was probable according to the statistical data. It imitates.
If you have some text and want a probable continuation that often occurred in texts similar to it, LLMs can be great for that. Though note that if there is no probable continuation, it will often fall back to whichever improbable one is least improbable.
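The "predict the most likely next word from previous data" idea above can be shown with a toy sketch. This is a bigram model over a made-up three-sentence corpus, not how real LLMs work internally (they use neural networks over subword tokens), but the core mechanism of picking a statistically likely continuation is the same:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus".
corpus = (
    "what should i put on my pizza "
    "cheese goes well on my pizza "
    "i put cheese on my pizza"
).split()

# Count how often each word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Continue a text word by word, always taking the most probable next word.
text = ["on"]
for _ in range(2):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))  # prints "on my pizza"
```

Nothing here "knows" what a pizza is; it only knows that "my" followed "on" three times in the data, which is exactly the imitation-without-understanding point made above, just at a vastly smaller scale.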
Measure resource usage during play. What is the bottleneck?
A party in the U.S. of any relevance that could be described as "left-wing" would be news to me.
You've got a corrupt conservative party and an extremely corrupt "pro"gressive(regressive?) anti-democratic party.
Third parties are never an attractive choice for anyone in a first-past-the-post voting system with two extremely dominant parties, regardless of what any of those parties stand for. The only sensible choice is the (in your opinion) least bad option that still has a realistic chance of winning.