this post was submitted on 30 Sep 2025
66 points (90.2% liked)

Linux

A community for everything relating to the GNU/Linux operating system (except the memes!)


[–] Lazycog@sopuli.xyz 77 points 1 day ago

From their release notes:

[...] Note that this feature is completely optional and no AI related code is even loaded until you configure an AI provider.

I'm glad for that part.
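The pattern the release notes describe ("no AI related code is even loaded until you configure an AI provider") is essentially lazy importing. A minimal sketch with invented names; `json` stands in for the real plugin module so the snippet runs, and this is not Calibre's actual implementation:

```python
# Hypothetical sketch of "no AI code is loaded until a provider is configured".
# The names (get_ai_module, ai_provider) are invented for illustration.
import importlib

_ai_module = None

def get_ai_module(config: dict):
    """Import the AI integration lazily, and only if a provider is set."""
    global _ai_module
    if not config.get("ai_provider"):
        return None  # nothing configured: the module is never imported
    if _ai_module is None:
        # "json" is a stand-in for the real AI plugin module
        _ai_module = importlib.import_module("json")
    return _ai_module
```

With an empty config the import never happens, which is what makes the feature truly opt-in.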

[–] Harvey656@lemmy.world 5 points 23 hours ago (1 children)

I... kinda don't care about this. It does nothing if not enabled, and it's super barebones as far as I can tell.

[–] VitoRobles@lemmy.today 1 points 20 hours ago

This is why I hate how everything is AI now.

Generative AI is absolute trash.

A fancy souped-up search engine using AI to check files and answer questions about the files? Whatever.

[–] A_norny_mousse@feddit.org 9 points 1 day ago (1 children)

What would it be doing if I enabled it?

And can it be configured to use local(ly installed) AI?

[–] Successful_Try543@feddit.org 11 points 1 day ago (1 children)

Probably help you comprehend sections of the text.

Regarding your second question: yes, you can, e.g., use a local Ollama instance. https://news.itsfoss.com/content/images/2025/09/calibre-ai-integration.png
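For the curious: a local Ollama instance listens on port 11434 by default and exposes an HTTP `/api/generate` endpoint. A rough sketch of what asking it about a passage could look like (the prompt wording is invented, and `ask_ollama` needs `ollama serve` running locally):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, passage: str, question: str) -> dict:
    """Assemble a non-streaming generate request for a local Ollama instance."""
    return {
        "model": model,
        "prompt": f"{question}\n\n{passage}",
        "stream": False,  # ask for one complete JSON response, not a stream
    }

def ask_ollama(payload: dict) -> str:
    """Send the request; requires a running local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Nothing leaves the machine: the question and the passage only ever travel to localhost.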

[–] sukhmel@programming.dev 5 points 1 day ago

I thought it would be useful for filling in and finding metadata, but I don't think it can do that.

[–] Maiq@piefed.social 21 points 1 day ago (2 children)

Why the fuck would I want that?

[–] stray@pawb.social 5 points 20 hours ago

Users can highlight any text and ask AI models questions about it, receiving explanations, context, or summaries on the spot.

I can see this being particularly useful for autistic people who don't understand a very poetic section, or for people reading a text which is not their first language.
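The highlight-and-ask workflow described above boils down to wrapping the selected text in a prompt template before it goes to the model. A hypothetical sketch (the template wording is invented, not Calibre's):

```python
# Hypothetical prompt templates for the highlight-and-ask workflow:
# explanations, context, or summaries of a selected passage.
TEMPLATES = {
    "explain":   "Explain the following passage in plain language:\n\n{text}",
    "context":   "What background or context helps make sense of this passage?\n\n{text}",
    "summarize": "Summarize the following passage in two sentences:\n\n{text}",
}

def make_prompt(kind: str, highlighted: str) -> str:
    """Turn a highlighted passage into one of the three question types."""
    return TEMPLATES[kind].format(text=highlighted)
```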

[–] faintwhenfree@lemmus.org 39 points 1 day ago

Then don't enable it. I hate AI too, but there are people who want it, and Calibre implemented it the right way: it's opt-in, and until you configure an AI provider, no AI-related code is even loaded. That's the best way to implement AI.

[–] SW42@lemmy.world 17 points 1 day ago (1 children)

I don’t really see a use case for it in Calibre. Maybe others are more creative than I am and can tell me what they would use it for.

As far as I can see from the release notes, it is completely optional and nothing gets loaded until the provider is defined and activated, so it’s peachy for me.

[–] unknowing8343@discuss.tchncs.de 8 points 1 day ago (4 children)

Well, have you ever read something and gone "what the hell is this?", "I don't get this", "what is an abubemaneton?"? Now you'll be able to quickly ask AI about it.

[–] msage@programming.dev 6 points 23 hours ago (1 children)

I would always prefer a dictionary or Wikipedia over a fucking LLM.

[–] stray@pawb.social 3 points 20 hours ago (1 children)

But you can't always use a dictionary to understand the use of words or phrases in novel contexts. For instance, maybe a phrase contains meaning not in its literal text, but in its resemblance to a quotation from another work, or to a previous quotation within the same work.

I've taken a classic Swedish poem ("Bron" by Erik Lindorm), and I can't seem to find an explanation of what it means via traditional searching. But when I paste it into ChatGPT and ask what it means, it gives a detailed interpretation.

A human could do the same, but it's unreasonable to expect every learner has a human on standby to cater to their every educational whim at all hours of the day.

[–] msage@programming.dev 0 points 19 hours ago (1 children)

LLMs will tell you anything, and if you can't fact check it, you will never know if it's true or not.

If you're fine with believing in random words put together nicely, I can't stop you, but I won't cheer you on, either.

LLMentalist

[–] stray@pawb.social 1 points 1 hour ago (1 children)

Your own reference explains how the AI responses are more than just random words put together nicely, and I can and have fact-checked it. It's not a trustworthy tool for many things, but it is useful for language-based pursuits, because language is precisely what it's designed to work with.

For example, I've recently been watching a Chinese period drama and asked ChatGPT what various words were by transcribing how I heard them and explaining the context. It gave me accurate hanzi, pinyin, and definitions as confirmed by dictionary sites. It's been a very valuable tool to me for language learning.

[–] msage@programming.dev 1 points 9 minutes ago

It's a token-based model. Not exactly language.

I'm happy that you feel like it's helping you.

But never trust it.

[–] ulterno@programming.dev 5 points 1 day ago (2 children)

I remember having an Oxford Dictionary CD as a child (got it with the physical copy).
Unfortunately, it stopped working long ago (and I didn't rip it), but while it did work, I had quite a lot of fun reading up on word-origins, synonyms/antonyms, pronunciations and whatnot.

I'd honestly rather be able to connect something like that to Calibre (and other programs) over DBus than use AI for a definition. And that was just a single CD (I can be sure, because I didn't have a DVD reader).

So, perhaps some other use case?
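For what it's worth, there is no standard desktop dictionary service on DBus today, so the bus name and method below (`org.example.Dictionary` / `Define`) are purely hypothetical; this is only a sketch of what such a Calibre-to-dictionary hookup could look like:

```python
# Hypothetical: the DBus service name and method are invented for illustration.
import subprocess

def define_command(word: str) -> list[str]:
    """Build a dbus-send invocation for a (hypothetical) dictionary service."""
    return [
        "dbus-send", "--session", "--print-reply",
        "--dest=org.example.Dictionary",
        "/org/example/Dictionary",
        "org.example.Dictionary.Define",
        f"string:{word}",
    ]

def define(word: str) -> str:
    """Run the query; only works if such a service actually exists on the bus."""
    return subprocess.run(define_command(word), capture_output=True, text=True).stdout
```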

[–] unknowing8343@discuss.tchncs.de 3 points 10 hours ago (1 children)

AI is not only capable of definitions. In fact, you wouldn't use it for that. But it's terribly good at context, so it can interpret a whole phrase or paragraph. Maybe Calibre even passes the book metadata so it can infer characters, places, and broader context.

[–] ulterno@programming.dev 0 points 57 minutes ago

Yeah, that won't really be doable with just an extended dictionary.
I myself tend to use Google sometimes to look for things like "one word for the phrase ...", and most of the time the AI is the one giving the answer.

[–] Auster@thebrainbin.org 5 points 22 hours ago

QuickDic's default databases are compiled from Wiktionary entries, and Wiktionary seems like the most reliable part of the Wikipedia ecosystem currently. I wonder whether that couldn't be used here too. On QuickDic, having all databases installed takes a bit over 1 GB, which isn't much by desktop standards, AFAIK.

[–] athatet@lemmy.zip 4 points 23 hours ago

And then the AI can give you some made up and incorrect answer. Hooray!

[–] blarghly@lemmy.world 2 points 22 hours ago (1 children)

Sure, I guess, but I feel I could just as easily ask ChatGPT in the ChatGPT app.

[–] unknowing8343@discuss.tchncs.de 2 points 10 hours ago

This just makes it faster and more convenient; you don't have to leave your book.

[–] tanisnikana@lemmy.world 12 points 1 day ago

AI slop has no place in books.

[–] vrighter@discuss.tchncs.de 7 points 1 day ago

fuck that shit!

[–] Euphoma@lemmy.ml 2 points 1 day ago (1 children)

So if it only lets the LLM see highlighted text, what's the point of even adding it to Calibre? It takes zero extra seconds to paste that text into Google or ChatGPT or duck.ai or whatever.

[–] unknowing8343@discuss.tchncs.de 6 points 1 day ago (1 children)

You know it doesn't take zero seconds; record yourself. If you are studying or something (and somehow need AI comments), it's a good feature.

[–] calliope@retrolemmy.com 4 points 1 day ago* (last edited 1 day ago)

There’s also a small mental cost to switching contexts, from reader to browser.

This is actually one of the better uses of AI I’ve seen because it is literally asking a large language model about language.

I probably still won’t use it, but at least it makes sense!

[–] Unattributed@feddit.online -1 points 1 day ago

Just what nobody wanted in their eReading software.