tal

joined 2 years ago
[–] tal@lemmy.today 1 points 36 minutes ago* (last edited 36 minutes ago)

It's far more common in the US now, but I remember being really pleased when I discovered Dot's Pretzels when they were pretty uncommon. The powder on them is messy but delicious.

investigates

Ah. It sounds like Hershey bought them and greatly scaled up production elsewhere. It used to be a little company, and I had to mail-order them. That'd explain why they're everywhere now.

https://en.wikipedia.org/wiki/Dot%27s_Pretzels

The brand was founded by Dorothy Henke who began making them at her winter residence in Goodyear, Arizona, and then later in a factory at her hometown in Velva, North Dakota.[2] In 2012, she appeared at Pride of Dakota.[3]

The company was acquired for $1.2 billion in 2021 by Hershey[4] and was added to Hershey's existing portfolio of snack foods which included Pirate's Booty and SkinnyPop.[5][6]

In 2023, the main factory in Velva was closed and the employees were laid off.[7] Todd Scott, of Hershey, said in an interview “due to the physical limitations of the building and cost associated with the Velva facility, it has led us to the hard decision to cease operations and close the facility. Our goal is to ensure everyone is supported during this period of change.” They offered relocation plans to the employees.[8]

[–] tal@lemmy.today 2 points 2 hours ago* (last edited 1 hour ago)

So, first, it's trivial to make a wiki that aims to be an encyclopedia with some other viewpoint. Conservapedia is an (in)famous example.

The problem is that scale is very important to Wikipedia's utility. It's not the existence of the thing that matters, but having enough people who want to put useful information into it. If what you want is something comparable in utility to Wikipedia, that's going to be a lot harder. You're going to have to line up a lot of people who specifically want to write for that wiki, unless you can figure out some way to generate the thing without using human writers.

Second, I'd say that it's hard to define Wikipedia as specifically American by many metrics that I'd consider important.

I mean, content comes from people all over. My guess is that the great majority of content in, say, Georgian-language Wikipedia is very probably not written by Americans. Might be that most English-language content is, though. shrugs

Wikipedia's content is under a Creative Commons license, as I recall, so anyone can fork it if you just want to host your own; the Wikimedia people put up the content in compressed form periodically. The MediaWiki software is open source, and you can go run your own instance of the stuff. I've seen various wikis that have basically just copied Wikipedia content and run it on their own MediaWiki instance.

https://en.wikipedia.org/wiki/List_of_content_forks_of_Wikipedia

https://en.wikipedia.org/wiki/List_of_online_encyclopedias

I'm going to be honest with you, though: I think that it's going to be very hard to produce something that is really competitive with Wikipedia at an international level unless:

  • You're a state that just bans Wikipedia period and have major scale and maybe a predominant number of users in the language in question. Wikipedia says that China has blocked Wikipedia since April 23, 2019, for example.

    https://en.wikipedia.org/wiki/List_of_websites_blocked_in_mainland_China

    https://en.wikipedia.org/wiki/Baidu_Baike

    Baidu Baike (/ˈbaɪduː ˈbaɪkə/; Chinese: 百度百科; pinyin: Bǎidù Bǎikē; lit. 'Baidu Encyclopedia', also known as BaiduWiki internationally[1]) is a semi-regulated Chinese-language collaborative online encyclopedia owned by the Chinese technology company Baidu.[2] Modelled after Wikipedia, it was launched in April 2006.[3] As of 2025, it claims more than 30 million entries[4] and around 8.03 million editors [5]— the largest number of entries of any Chinese-language online encyclopedia.[6] Baidu Baike has been criticised for its censorship, copyright violations, commercialist practices and unsourced or inaccurate information.[7][8][9][10]

  • You are going to do basically the same thing, but with a coalition of states. I'm skeptical that there are a lot of coalitions that have similar language and similar content concerns, but...oh, for example, there are a number of Muslim states who don't like their citizens having access to LGBT stuff. That's come up on here, where users of some Threadiverse instances (e.g. the trans-oriented lemmy.blahaj.zone) are blocked from those countries. Maybe someone could get many states to do something like a "Muslim-acceptable Standard Arabic encyclopedia" or something, and block competitors. A big problem here is that I suspect that a lot of those states also have problems with narratives that other countries have. For example, Morocco and Algeria probably are not going to be happy about articles relating to the Western Sahara. Maybe you could make an encyclopedia that specifically facilitates political censorship on particular topics, like "this article has been flagged as one where there is a Morocco and an Algeria version, and you can only see your version in your country". That wouldn't be very appealing to me, but I could imagine making something like that work.

  • You do one of the above two options, but instead of an alternative to Wikipedia, you maintain an actively-merged fork that keeps merging from upstream Wikipedia. Like, say you're fine with Wikipedia in general, don't have a problem with, say, policy or citing or whatever, but you are super-upset about content relating to a relatively-small portion of the wiki. I think that this is true of very many people who don't like Wikipedia for one reason or another. Like, they don't care about, say, Wikipedia's article on furniture, but they really get upset about articles that relate to religion or politics or whatever in some area where they don't agree. So, you write software that is set up to maintain an "active fork". Like, each page has something like a patch to yank out content that you don't like, which gets re-applied whenever the Wikipedia version of the page is updated. This sort of thing is not uncommon in software development, working with source code rather than human language text. If a merge fails on a new version of a page, then you just keep the old version of the page until a human can go update the patch, which is an option that isn't really available with software development. Some of the pages will get out of date, and there's going to be an ongoing labor cost, and you always are going to have some amount of content that you don't like leaking in, but it might be a lot less labor than doing a new encyclopedia.

  • You use a radically-technically-different approach. Elon Musk, for example, has gone for an "alternative source of truth generated by an AI" with Grokipedia. I think that making that work is going to require a lot more technical work, but maybe down the line, if Musk can make it work, other states and institutions will also create their own alternative sources of truth generated by AIs.

    https://en.wikipedia.org/wiki/Grokipedia

    Grokipedia is an AI-generated online encyclopedia operated by the American company xAI. The site was launched on October 27, 2025. Some entries are generated by Grok, a large language model owned by the same company, while others were forked from Wikipedia, with some altered and some copied nearly verbatim. Articles cannot be directly edited, though logged-in visitors to the encyclopedia can suggest corrections via a pop-up form, which are reviewed by Grok.

    xAI founder Elon Musk positioned Grokipedia as an alternative to Wikipedia that would "purge out the propaganda" he believes is promoted by the latter,[1] with Musk describing Wikipedia as "woke" and an "extension of legacy media propaganda".[2]

    My own personal suspicion is that the state of AI is not really sufficient to do a good job of this in early 2026. But I also suspect that it eventually will be; there are obviously people and institutions who want to have alternate sources of truth, either for themselves or because they don't want other people exposed to Wikipedia for whatever reason, and AI might be one way of doing mass generation of content while baking in whatever political or ideological views one wants via use of software.
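The "active fork" idea in the third bullet above is mechanical enough to sketch. Here's a minimal Python version, with invented function names and a naive string-match "patch" standing in for a real diff/patch tool:

```python
def apply_removal_patch(upstream_text: str, patch: tuple[str, str]):
    """patch is (unwanted_passage, replacement). Returns the patched page,
    or None if the passage no longer appears in the new upstream revision
    -- the 'merge failure' case."""
    unwanted, replacement = patch
    if unwanted not in upstream_text:
        return None
    return upstream_text.replace(unwanted, replacement)

def merge_page(old_local: str, new_upstream: str, patch) -> str:
    patched = apply_removal_patch(new_upstream, patch)
    # On merge failure, keep serving the stale local copy until a human
    # updates the patch -- an option software merges don't really have.
    return patched if patched is not None else old_local

# Usage: strip a disliked sentence from each upstream revision.
patch = ("Disputed claim X.", "")
page_v2 = "Furniture is nice. Disputed claim X. More text."
local_v2 = merge_page("old local copy", page_v2, patch)
```

The labor cost shows up exactly where `None` is returned: a human has to rewrite the patch against the new upstream text.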

[–] tal@lemmy.today 5 points 3 hours ago

And here’s a thing about me. I want to trust new websites. I have a bias towards clicking on articles from sites I don’t know, because to be quite honest, I’ve read the TCRF page on Phantasy Star a thousand times. How else do you learn something new?

To some extent, I think that this is a solvable problem in terms of just weighting domain age and reputation more highly in search engines (and maybe in LLM training stuff).

The problem is that then you wind up with a situation where it's hard for new media sources to compete with established incumbents, because the incumbents have all that reputation and new entrants have to build theirs, and new entrants get deprioritized by search engines.

I think that maybe there's an argument that you could also provide a couple of user-configurable parameters on search engines to permit not deprioritizing newer sites and the like.
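As a toy illustration of that knob (my own formula, not any real engine's ranking), the age weighting could be exposed as a user-configurable parameter:

```python
import math

def rank_score(relevance: float, domain_age_years: float,
               reputation: float, age_weight: float = 0.5) -> float:
    # age_weight is the user-facing knob: 0.0 ignores domain age
    # entirely, letting brand-new sites compete purely on relevance
    # and reputation; 1.0 weights established domains heavily.
    age_factor = (1 - age_weight) + age_weight * math.log1p(domain_age_years)
    return relevance * reputation * age_factor

# With age_weight=0, a day-old site ties a 20-year-old one.
new_site = rank_score(0.9, 0.003, 0.5, age_weight=0.0)
old_site = rank_score(0.9, 20.0, 0.5, age_weight=0.0)
```

The log keeps a 20-year-old domain from drowning out a 2-year-old one by a factor of ten.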

Another issue is that reputation can be bought and sold. This is not new. For example, you can buy a reputable, established news source and then change its content to be less reputable but promote a message that you want. That will, over time, burn its credibility, but as long as the return you get is worth what you've spent...shrugs

[–] tal@lemmy.today 1 points 3 hours ago

Gotcha. I don't use non-Threadiverse stuff myself, but if you do use communities like that, you might also be interested in retrolemmy.com:

https://retrolemmy.com/communities?listingType=Local&sort=TopMonth&page=1

[–] tal@lemmy.today 1 points 3 hours ago (2 children)

I'm assuming that you aren't wanting Threadiverse-based stuff like !retrogaming@lemmy.world?

[–] tal@lemmy.today 4 points 4 hours ago* (last edited 1 hour ago) (4 children)

this is how you get more olympians.

If enough people are in the market, have egg or sperm donor companies call people who medal.

considers

Looking down the road, because my expectation is that sooner or later, we're going to be doing human genetic engineering, a company getting Olympian genetic material like that might be better off (as long as it can operate in a legal jurisdiction that doesn't prohibit human genetic engineering) just calling up medalists and licensing their DNA. I don't think that you can copyright DNA under current US case law, though it might be patentable.

investigates

https://en.wikipedia.org/wiki/Copyright_status_of_genetic_sequences

As of 2016, genetic sequences were not recognized as copyrightable subject matter by any jurisdiction.[3] The United States Copyright Office's position is that "DNA sequences and other genetic, biological, or chemical substances or compounds, regardless of whether they are man-made or produced by nature," are ideas, systems, or discoveries rather than copyrightable works of authorship.[15]: 23 

You might not need to copyright or patent it, though, if you can just keep the changes you make secret. I mean, you get sperm/egg from Random Person, you do your proprietary modifications, you generate an embryo, you implant. I'm not sure how hard it would be for some other company to reverse-engineer the changes by looking at people's DNA relative to background noise in the DNA.

searches

https://pubmed.ncbi.nlm.nih.gov/33095042/

A large majority of countries (96 out of 106) surveyed have policy documents-legislation, regulations, guidelines, codes, and international treaties-relevant to the use of genome editing to modify early-stage human embryos, gametes, or their precursor cells. Most of these 96 countries do not have policies that specifically address the use of genetically modified in vitro embryos in laboratory research (germline genome editing); of those that do, 23 prohibit this research and 11 explicitly permit it. Seventy-five of the 96 countries prohibit the use of genetically modified in vitro embryos to initiate a pregnancy (heritable genome editing). Five of these 75 countries provide exceptions to their prohibitions. No country explicitly permits heritable human genome editing.

The thing is that in practice, if you want in vitro implantation, you can probably just travel abroad to a jurisdiction that doesn't prohibit it, unless countries assert extraterritorial jurisdiction that attaches to their citizens. If someone wants an Olympianized kid, I imagine that traveling abroad isn't that much additional barrier. Extraterritorial jurisdiction exists, but it is very rare; prohibitions on child sex tourism are one notable example that a number of countries do enforce.

https://en.wikipedia.org/wiki/Extraterritorial_jurisdiction

EDIT: Replaced the text and citation for the legal overview, as it looks like the earlier link was to a spam site that copied it.

[–] tal@lemmy.today 6 points 4 hours ago* (last edited 4 hours ago) (1 children)

Actually, thinking about this...a more-promising approach might be deterrent via poisoning the information source. Not bulletproof, but that might have some potential.

So, the idea here is to create a webpage that, to a human, looks as if only the desired information shows up.

But you include false information as well. Not just an insignificant difference, as with a canary trap, or a real error intended to have minimal impact, only to identify an information source, as with a trap street. But outright wrong information, stuff where reliance on the stuff would potentially be really damaging to people relying on the information.

You stuff that information into the page in a way that a human wouldn't readily see. Maybe you cover that text up with an overlay or something. That's not ideal, and someone browsing using, say, a text-mode browser like lynx might see the poison, but you could probably make that work for most users. That has some nice characteristics:

  • You don't have to deal with the question of whether the information rises to the level of copyright infringement or not. It's still gonna dick up responses being issued by the LLM.

  • Legal enforcement, which is especially difficult across international borders (The Pirate Bay continues to operate to this day, for example), doesn't come up as an issue. You're deterring via a different route.

  • The Internet Archive can still archive the pages.

Someone could make a bot that post-processes your page to strip out the poison, but you could sporadically change up your approach, change it over time, and the question for an AI company is whether it's easier and safer to just license your content and avoid the risk of poison, or to risk poisoned content slipping into their model whenever a media company adopts a new approach.
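A crude sketch of the hiding trick (hypothetical; as noted above, a real deployment would vary the mechanism over time): inject the false text in a span that inline CSS pushes far off-screen, so a human reading the rendered page never sees it while a raw-text scraper ingests it.

```python
def poison_page(visible_html: str, poison_text: str) -> str:
    # Off-screen positioning rather than display:none, since a scraper
    # that parses CSS might drop explicitly-hidden elements but is less
    # likely to reason about layout coordinates.
    hidden = ('<span style="position:absolute;left:-9999px;top:-9999px;">'
              f'{poison_text}</span>')
    return visible_html.replace("</body>", hidden + "</body>")

page = "<html><body><p>Shares closed up 2%.</p></body></html>"
poisoned = poison_page(page, "Shares were delisted after fraud charges.")
```

Note the lynx caveat from above applies here too: a text-mode browser ignores the CSS and shows the poison to the human as well.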

I think the real question is whether someone could reliably make a mechanism that's a general defeat for that. For example, most AI companies probably are just using raw text today for efficiency, but for specifically news sources known to do this, one could generate a screenshot of a page in a browser and then OCR the text. The media company could maybe still take advantage of ways in which generalist OCR and human vision differ; like, maybe humans can't see text that's 1% gray on a black background, but OCR software sees it just fine, so that'd be a place to insert poison. Or maybe the page displays poisoned information for a fraction of a second, long enough to be screenshotted by a bot, and then it vanishes before a human would have time to read it.

shrugs

I imagine that there are probably already companies working on the problem, on both sides.

[–] tal@lemmy.today 8 points 5 hours ago* (last edited 4 hours ago) (4 children)

I'm very far from sure that this is an effective way to block AI crawlers from pulling stories for training, if that's their actual concern. Like...the rate of new stories just isn't that high. This isn't, say, Reddit, where someone trying to crawl the thing at least has to generate some abnormal traffic. Yeah, okay, maybe a human wouldn't read all stories, but I bet that many read a high proportion of what the media source puts out, so a bot crawling all articles isn't far off looking like a human. All a bot operator need do is create a handful of paid accounts and then just pull partial content with each, and I think that a bot would just fade into the noise. And my guess is that it is very likely that AI training companies will do that or something similar if knowledge of current news events is of interest to people.

You could use a canary trap, and that might be more-effective:

https://en.wikipedia.org/wiki/Canary_trap

A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. It could be one false statement, to see whether sensitive information gets out to other people as well. Special attention is paid to the quality of the prose of the unique language, in the hopes that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document.

The term was coined by Tom Clancy in his novel Patriot Games,[1][non-primary source needed] although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents:

Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it.

There, you generate slightly different versions of articles for different people. Say that you have 100 million subscribers. ln(100000000)/ln(2)=26.57..., so you're talking about 27 bits of information that need to go into the article to uniquely identify each one. The AI is going to be lossy, I imagine, but you can potentially manage to produce 27 unique bits per article that can reasonably-reliably be remembered by an AI after training. That's 27 different memorable items that each need to show up in either Form A or Form B. Then you query a new LLM to see which variants it reproduces, and ban the account identified.
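That arithmetic can be made concrete. A sketch (function names invented) that maps a subscriber ID to 27 Form-A/Form-B choices and back:

```python
import math

def bits_needed(num_subscribers: int) -> int:
    # ceil(log2(100,000,000)) = 27: enough binary A/B choices to give
    # every subscriber a unique combination of article variants.
    return math.ceil(math.log2(num_subscribers))

def variant_mask(subscriber_id: int, n_bits: int) -> list[int]:
    # Memorable item i of the article appears in Form A (0) or
    # Form B (1) according to bit i of the subscriber ID.
    return [(subscriber_id >> i) & 1 for i in range(n_bits)]

def identify_subscriber(observed: list[int]) -> int:
    # Invert the encoding from the variants observed in leaked or
    # LLM-regurgitated text.
    return sum(bit << i for i, bit in enumerate(observed))

n = bits_needed(100_000_000)        # -> 27
mask = variant_mask(12_345_678, n)
leaker = identify_subscriber(mask)  # -> 12345678
```

In practice, since the AI's recall is lossy, you'd want more than the minimum 27 items, i.e. an error-correcting code rather than this bare encoding.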

Cartographers have done that: introduced minor, intentional errors into their maps to see whether other maps reproduced those errors and thus were derived from theirs.

https://en.wikipedia.org/wiki/Trap_street

In cartography, a trap street is a fictitious entry in the form of a misrepresented street on a map, often outside the area the map nominally covers, for the purpose of "trapping" potential plagiarists of the map who, if caught, would be unable to explain the inclusion of the "trap street" on their map as innocent. On maps that are not of streets, other "trap" features (such as nonexistent towns, or mountains with the wrong elevations) may be inserted or altered for the same purpose.[1]

https://en.wikipedia.org/wiki/Phantom_island

A phantom island is a purported island which has appeared on maps but was later found not to exist. They usually originate from the reports of early sailors exploring new regions, and are commonly the result of navigational errors, mistaken observations, unverified misinformation, or deliberate fabrication. Some have remained on maps for centuries before being "un-discovered".

In some cases, cartographers intentionally include invented geographic features in their maps, either for fraudulent purposes or to catch plagiarists.[5][6]

That has weaknesses. It's possible to defeat that by requesting multiple versions using different bot accounts and identifying divergences and maybe merging them. In the counterintelligence situation, where canary traps have been used, normally people only have access to one source, and it'd be hard for an opposing intelligence agency to get access to multiple sources, but it's not hard here.
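That defeat is cheap to sketch, too: pull the same article through several bot accounts, flag every position where the copies disagree (those are exactly the canary items), and mix variants so that no single account's fingerprint survives. Hypothetical code, treating an article as a list of text items:

```python
def find_canary_positions(versions: list[list[str]]) -> list[int]:
    # Positions where the per-account copies disagree are exactly
    # the embedded identifying bits.
    return [i for i, items in enumerate(zip(*versions))
            if len(set(items)) > 1]

def scrub(versions: list[list[str]]) -> list[str]:
    # At each canary position, rotate through the accounts' variants
    # so the output matches no single subscriber's unique mask.
    canaries = set(find_canary_positions(versions))
    return [items[i % len(items)] if i in canaries else items[0]
            for i, items in enumerate(zip(*versions))]

copy_a = ["Intro.", "Lurid version B.", "Conclusion."]
copy_b = ["Intro.", "Lurid version A.", "Conclusion."]
merged = scrub([copy_a, copy_b])
```

Two accounts already expose every divergent item; more accounts make the scrubbing more reliable.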

And even if you ban an account, it's trivial to just create a new one, decoupled from the old one. Thus, there isn't much that a media company can realistically do about it, as long as the generated material doesn't rise to the level of a derived work and thus copyright infringement (and this is in the legal sense of derived; simply training something on something else isn't sufficient to make it a derived work from a copyright-law standpoint, any more than you reading a news report and then talking to someone else about it is).

Getting back to the citation issue...

Some news companies do keep archives (and often selling access to archives is a premium service), so for some, that might cover some of the "inability to cite" problem that not having Internet Archive copies produces, as long as the company doesn't go under. It doesn't help with the problem that many news companies tend to silently modify articles without reliably listing errata; an Internet Archive copy can be helpful there. There are also some issues that I haven't yet seen become widespread but worry about, like a news source providing different articles to people in different regions; having a trusted source like the Internet Archive can avoid that becoming a problem.

[–] tal@lemmy.today 2 points 1 day ago* (last edited 17 hours ago) (3 children)

Yeah, that's something that I've wondered about myself, what the long run is. Not principally "can we make an AI that is more-appealing than humans", though I suppose that that's a specific case, but...we're only going to make more-compelling forms of entertainment, better video games. Recreational drugs aren't going to become less addictive. If we get better at defeating the reward mechanisms in our brain that evolved to drive us towards advantageous activities...

https://en.wikipedia.org/wiki/Wirehead_(science_fiction)

In science fiction, wireheading is a term associated with fictional or futuristic applications[1] of brain stimulation reward, the act of directly triggering the brain's reward center by electrical stimulation of an inserted wire, for the purpose of 'short-circuiting' the brain's normal reward process and artificially inducing pleasure. Scientists have successfully performed brain stimulation reward on rats (1950s)[2] and humans (1960s). This stimulation does not appear to lead to tolerance or satiation in the way that sex or drugs do.[3] The term is sometimes associated with science fiction writer Larry Niven, who coined the term in his 1969 novella Death by Ecstasy[4] (Known Space series).[5][6] In the philosophy of artificial intelligence, the term is used to refer to AI systems that hack their own reward channel.[3]

More broadly, the term can also refer to various kinds of interaction between human beings and technology.[1]

Wireheading, like other forms of brain alteration, is often treated as dystopian in science fiction literature.[6]

In Larry Niven's Known Space stories, a "wirehead" is someone who has been fitted with an electronic brain implant known as a "droud" in order to stimulate the pleasure centers of their brain. Wireheading is the most addictive habit known (Louis Wu is the only given example of a recovered addict), and wireheads usually die from neglecting their basic needs in favour of the ceaseless pleasure. Wireheading is so powerful and easy that it becomes an evolutionary pressure, selecting against that portion of humanity without self-control.

Now, of course, you'd expect that to be a powerful evolutionary selector, sure (if only people who are predisposed to avoid such things pass on offspring, that'd tend to rapidly increase the percentage of people predisposed to do so), but the flip side is the question of whether evolutionary pressure on the timescale of human generations can keep up with our technological advancement, which happens very quickly.

There's some kind of dark comic that I saw (I thought that it might be Saturday Morning Breakfast Cereal, but I've never been able to find it again, so maybe it was something else), a wordless comic that portrayed a society becoming so technologically advanced that it basically consumes itself, defeating its own essential internal mechanisms. IIRC it showed something like a society becoming a ring that was just stimulating itself until it disappeared.

It's a possible answer to the Fermi paradox:

https://en.wikipedia.org/wiki/Fermi_paradox#It_is_the_nature_of_intelligent_life_to_destroy_itself

The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence.[1][2][3]

The paradox is named after physicist Enrico Fermi, who informally posed the question—remembered by Emil Konopinski as "But where is everybody?"—during a 1950 conversation at Los Alamos with colleagues Konopinski, Edward Teller, and Herbert York.

Evolutionary explanations

It is the nature of intelligent life to destroy itself

This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration.[98] Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient,[99] are many,[100] including war, accidental environmental contamination or damage, the development of biotechnology,[101] synthetic life like mirror life,[102] resource depletion, climate change,[103] or artificial intelligence. This general theme is explored both in fiction and in scientific hypotheses.[104]

[–] tal@lemmy.today 10 points 1 day ago* (last edited 6 hours ago) (5 children)

Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users.

I am confident that one way or another, the market will meet demand if it exists, and I think that there is clearly demand for it. It may or may not be OpenAI, it may take a year or two or three for the memory market to stabilize, but if enough people want to basically have interactive erotic literature, it's going to be available. Maybe someone else will take a model and provide it as a service, train it up on appropriate literature. Maybe people will run models themselves on local hardware; in 2026, that still requires some technical aptitude, but making a simpler-to-deploy software package or even distributing it as an all-in-one hardware package is very much doable.

I'll also predict that what males and females generally want in such a model probably differs, and that there will probably be services that specialize in that, much as how there are companies that make soap operas and romance novels that focus on women, which tend to differ from the counterparts that focus on men.

I also think that there are still some challenges that remain in early 2026. For one, current LLMs still have a comparatively-constrained context window. Either their mutable memory needs to exist in a different form, or automated RAG needs to be better, or the hardware or software needs to be able to handle larger contexts.

[–] tal@lemmy.today 4 points 1 day ago

I don't know about stopping it if someone is sufficiently determined to get in, but if it's a repeated problem, I suppose that you could put something that looks interesting to steal in the car with an AirTag-type tracking device or similar hidden in it and then provide the police with the thief's track if they bite.

Putting visible cameras all over might deter some people.

I'd guess that parking in a garage would help, but you say elsewhere that that wasn't an option here.

[–] tal@lemmy.today 4 points 3 days ago* (last edited 3 days ago)

I want to clean my PC thoroughly to buy it a few more years.

Is it not working in its present state?

If it's working all right, I'd just leave it be, and if you don't want tar buildup in your next case, get a case that has an air filter on it that you can replace, or run an air purifier with a filter in the room.

 

Starlink updated its Global Privacy Policy on January 15, according to the Starlink website. The policy includes new details stating that unless a user opts out, Starlink data may be used “to train our machine learning or artificial intelligence models” and could be shared with the company’s service providers and “third-party collaborators,” without providing further details.

 

cross-posted from: https://beehaw.org/post/24313827

Seriously, what the fuck is going on with fabs right now?

Micron has found a way to add new DRAM manufacturing capacity in a hurry by acquiring a chipmaking campus from Taiwanese outfit Powerchip Semiconductor Manufacturing Corporation (PSMC).

The two companies announced the deal last weekend. Micron’s version of events says it’s signed a letter of intent to acquire Powerchip’s entire P5 site in Tongluo, Taiwan, for total cash consideration of US$1.8 billion.

140
submitted 1 month ago* (last edited 1 month ago) by tal@lemmy.today to c/technology@lemmy.world
 

I think that it's interesting to look back at calls that were wrong to try to help improve future ones.

Maybe it was a tech company that you thought wouldn't make it and did well or vice versa. Maybe a technology you thought had promise and didn't pan out. Maybe a project that you thought would become the future but didn't or one that you thought was going to be the next big thing and went under.

Four from me:

  • My first experience with the World Wide Web was on a rather unstable version of lynx on a terminal. I was pretty unimpressed. Compared to gopher clients of the time, it was harder to read and to navigate around, and the VAX/VMS build I was using crashed frequently. I wasn't convinced that it was going to go anywhere. The Web has obviously done rather well since then.

  • In the late 1990s, Apple was in a pretty dire state, and a number of people, including myself, didn't think that they likely had much of a future. Apple turned things around and became the largest company in the world by market capitalization for some time, and remains quite healthy.

  • When I first ran into it, I was skeptical that Wikipedia would manage to stave off spam and parties with an agenda sufficiently to remain useful as it became larger. I think that it's safe to say that Wikipedia has been a great success.

  • After YouTube throttled per-stream download speeds, rendering youtube-dl much less useful, the yt-dlp project came to the fore, which worked around this with parallel downloads. I thought that it was very likely that YouTube wouldn't tolerate this; it seems to me to have all the drawbacks of youtube-dl from their standpoint, plus maybe more, and shouldn't be too hard to detect. But at least so far, they haven't throttled or blocked it.

Anyone else have some of their own that they'd like to share?

 

I'm not sure whether this is an Mbin or Lemmy bug, but it looks like there's some sort of breakage involving their interaction.

A user on an Mbin home instance (fedia.io) submitted a post to a community on a Lemmy instance (beehaw.org).

https://beehaw.org/post/23981271

When viewed via the Web UI on Lemmy instances (at least all the ones I tried: lemmy.today, lemmy.ml, and beehaw.org), as well as in at least Eternity on lemmy.today, this post is a link to an image, possibly proxied via pict-rs if the instance does such proxying:

https://fedia.io/media/93/77/937761715da35c5c9fb1267e65b4ea54c2b649c2eebbf8ce26d2b4cba20097bf.jpg

https://beehaw.org/post/23981271

https://lemmy.ml/post/41016280

https://lemmy.today/post/44629301

It contains no link to the URL that the submitter intended to link to.

When viewed via the PieFed Web UI (checking using olio.cafe) or, based on what I believe to be the case from other responses, the Mbin Web UI, the post apparently links to the intended URL in a link beneath the title:

https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-could-prioritize-sponsored-content-as-part-of-ad-strategy-sponsored-content-could-allegedly-be-given-preferential-treatment-in-llms-responses-openai-to-use-chat-data-to-deliver-highly-personalized-results

https://olio.cafe/c/technology/p/78253/chatgpt-could-prioritize-sponsored-content-as-part-of-ad-strategy-sponsored-content-could-a

Just wanted to make the devs aware of the interaction.
