this post was submitted on 28 Mar 2025
33 points (100.0% liked)


Short version of the situation is that I have an old site I frequent for user-written stories. The site is ancient (think early 2000s) and has terrible tools for sorting and searching the stories. Half of the time, stories disappear from author profiles. There are thousands of stories, and you can only sort by top, new, and 30-day top.

I'm in the process of programming a scraper tool so I can archive the stories and give myself a library to better find forgotten stories on the site. I'll be storing tags, dates, authors, etc, as well as the full body of the text.

Concerning the data: there are a few thousand stories (ASCII only) and various data points for each story, with the bodies of many stories reaching several pages long.

Currently, I'm using Python to compile the data and would like to know what storage solution is ideal for my situation. I have a little familiarity with SQL, JSON, and YAML, but not enough to know what might be best. I am also open to any other solutions that work well with Python.

top 39 comments
[–] Kissaki@programming.dev 4 points 2 days ago* (last edited 2 days ago) (1 children)

I would separate concerns. For the scraping, I would dump the data as JSON onto disk. I would consider the folder structure to put them in: individual files, or one JSON document per line in bigger files for grouping. If the website has a good URL structure, the path can provide meaningful author and/or ID identifiers for folder or file names.
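
A minimal sketch of that dump step, assuming each scraped story arrives as a Python dict with hypothetical `author` and `id` fields:

```python
import json
from pathlib import Path

def dump_story(story: dict, root: Path = Path("raw")) -> None:
    """Write one scraped story as its own JSON file, grouped by author."""
    folder = root / story["author"]        # assumes an "author" field
    folder.mkdir(parents=True, exist_ok=True)
    out = folder / f"{story['id']}.json"   # assumes a site-provided "id"
    out.write_text(json.dumps(story, indent=2), encoding="utf-8")
```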

Storing JSON as text is simple. Depending on the amount, storing plain text is wasteful, and simple text compression can significantly reduce storage size. For text-only stories it's unlikely to become significant, though, and not compressing keeps the scraping process, and any validation that the scraped data is complete, simpler.
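
Trying the compression idea costs almost nothing, since gzip is in the standard library (a sketch with made-up data):

```python
import gzip
import json

story = {"id": 123, "title": "An Example", "body": "Once upon a time... " * 500}
raw = json.dumps(story).encode("utf-8")

with gzip.open("story.json.gz", "wb") as f:  # the compressed variant
    f.write(raw)

print(len(raw), "bytes raw vs", len(gzip.compress(raw)), "bytes compressed")
```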

I would then keep this raw data separate from anything I later do to modify or extend it, or to prototype presentation and interfacing.

[–] Bubs@lemm.ee 2 points 2 days ago

After reading some of the other comments, I'm definitely going to separate the systems. I'll use something like JSON or YAML as the output for the raw scraped data, and some sort of database for the final program.

[–] FizzyOrange@programming.dev 11 points 3 days ago (1 children)

Definitely SQLite. Easily accessible from Python, very fast, universally supported, no complicated setup, and everything is stored in a single file.

It even has a number of good GUI frontends. There's really no reason to look any further for a project like this.
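
For reference, a minimal sketch of that workflow with the standard-library sqlite3 module; the table layout is just a placeholder:

```python
import sqlite3

con = sqlite3.connect("stories.db")  # a single file, created on first use
con.execute("""CREATE TABLE IF NOT EXISTS stories
               (id INTEGER PRIMARY KEY, title TEXT, author TEXT, body TEXT)""")
with con:
    con.execute("INSERT INTO stories (title, author, body) VALUES (?, ?, ?)",
                ("An Example", "someone", "Once upon a time..."))
for title, in con.execute("SELECT title FROM stories ORDER BY title"):
    print(title)
```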

[–] Bubs@lemm.ee 3 points 3 days ago (1 children)

One concern I'm seeing from other comments is that I may have more data than SQLite is ideal for. I have thousands of stories (My estimate is between 10 and 40 thousand), and many of the stories can be several pages long.

[–] FizzyOrange@programming.dev 14 points 3 days ago (1 children)

Ha no. SQLite can easily handle tens of GB of data. It's not even going to notice a few thousand text files.

The initial import process can be sped up using transactions but as it's a one-time thing and you have such a small dataset it probably doesn't matter.
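
If the import ever does feel slow, the transaction trick looks roughly like this (a sketch, reusing the placeholder table layout from above):

```python
import sqlite3

con = sqlite3.connect("stories.db")
con.execute("""CREATE TABLE IF NOT EXISTS stories
               (id INTEGER PRIMARY KEY, title TEXT, author TEXT, body TEXT)""")
scraped = [("Title A", "author1", "body..."), ("Title B", "author2", "body...")]

with con:  # one transaction for the whole batch instead of one per row
    con.executemany("INSERT INTO stories (title, author, body) VALUES (?, ?, ?)",
                    scraped)
```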

[–] Bubs@lemm.ee 2 points 3 days ago

That's good to know.

[–] TehPers@beehaw.org 21 points 3 days ago (1 children)

SQL is designed for querying (it's a query language lol). If the stories are huge, you can save them to individual files and store the filepath in the database, but otherwise it can hold columns with a fair amount of data if needed.

You can probably get away with using sqlite. A more traditional database would be postgres, but it sounds like you just need the database available locally.

[–] Hoimo@ani.social 1 points 3 days ago

It's definitely possible to store the stories in columns, but there's also very little reason to do it. I'd store the file path in SQL and the stories in separate files in whatever format makes the most sense (HTML, TXT, EPUB). If you ever want to search the stories for keywords, write a Python script to build indexes in SQL; that performs much better than doing LIKE on a maxed-out VARCHAR column.

I was thinking maybe Elasticsearch, but I don't know how much work that is to set up. For a hobby project, writing your own indexer isn't too hard and might be more fun and easier to maintain than an industry-grade solution.
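
A toy version of such an indexer, assuming the story bodies live in files on disk (the tokenizer is deliberately crude):

```python
import re
import sqlite3
from pathlib import Path

con = sqlite3.connect("stories.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS keyword_index (word TEXT, story_id INTEGER);
CREATE INDEX IF NOT EXISTS idx_word ON keyword_index (word);
""")

def index_story(story_id: int, path: str) -> None:
    """Record every distinct word of a story in the index table."""
    words = set(re.findall(r"[a-z]+", Path(path).read_text(encoding="utf-8").lower()))
    with con:
        con.executemany("INSERT INTO keyword_index VALUES (?, ?)",
                        ((word, story_id) for word in words))

# Lookup: which stories mention "castle"?
ids = [sid for sid, in con.execute(
    "SELECT story_id FROM keyword_index WHERE word = ?", ("castle",))]
```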

[–] HelloRoot@lemy.lol 17 points 3 days ago* (last edited 3 days ago) (2 children)

Put them into an OpenSearch database. It is the open-source fork of Elasticsearch. It has an SQL plugin, so you can retrieve the raw data the usual way. And there is probably also an integration/library for it if you use any major framework/language in the backend.

But on top of that you get very performant full-text search. This might come in handy, for example, when you remember a sentence from a story, or if you want to find all stories with a specific character name or word for whatever reason.
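
For a flavor of what that looks like, a sketch with the opensearch-py client, assuming a local OpenSearch instance on the default port and a made-up index layout:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Index one scraped story (the field names are only an assumed layout).
client.index(index="stories", id="123", body={
    "title": "An Example",
    "author": "someone",
    "body": "The knight rode toward the castle...",
})

# Full-text search: find stories containing a half-remembered phrase.
hits = client.search(index="stories", body={
    "query": {"match": {"body": "rode toward the castle"}},
})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["title"])
```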

[–] epyon22@programming.dev 10 points 3 days ago

OpenSearch will be the most performant. Anything SQL will likely start to stumble with lots of stories or really long stories, whereas this is exactly what Lucene-based search engines (Solr, Elasticsearch, OpenSearch) are designed to do. Could an SQL solution solve your problem? Yes, but it may be a bit on the slow side as the number and size of your stories grows.

[–] Bubs@lemm.ee 3 points 3 days ago (1 children)

I do like the sound of that.

I'm not too worried about performance, since, once everything is running, most of the operations will only be run every few weeks or so. I don't want it slowing to a crawl, for sure, though.

The text search looks promising. I've had the idea of automating "likely tags" that look for keywords (sword = fantasy while spaceship = sci-fi). It's not perfect, but it could be useful to roughly categorize all the stories that are missing tags.
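
That tagging idea could start as small as this sketch (the keyword lists are invented and would need tuning):

```python
from collections import Counter

# Hypothetical keyword lists; real ones would come from skimming the corpus.
TAG_KEYWORDS = {
    "fantasy": {"sword", "castle", "dragon", "wizard"},
    "sci-fi": {"spaceship", "robot", "laser", "planet"},
}

def likely_tags(body: str, threshold: int = 3) -> list[str]:
    """Suggest tags whose keywords appear often enough in the story text."""
    counts = Counter(body.lower().split())
    return [tag for tag, keywords in TAG_KEYWORDS.items()
            if sum(counts[k] for k in keywords) >= threshold]
```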

[–] TehPers@beehaw.org 3 points 3 days ago* (last edited 3 days ago) (1 children)

An alternative could be to use something like postgres with the pgvector extension to do semantic searches instead of just text-based searches. You can generate embeddings for the text content of the story, then do the same for "sci-fi" or something, and see if searching that way gets you most of the way there.

Generating embeddings locally might take some time though if you don't have hardware suitable for it.
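
Roughly, with psycopg and the pgvector extension; `embed()` below is a placeholder for whatever local model you'd actually run:

```python
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

def embed(text: str) -> np.ndarray:
    # Placeholder: swap in a real local embedding model here.
    # Zeros just keep the sketch self-contained and runnable.
    return np.zeros(384, dtype=np.float32)

conn = psycopg.connect("dbname=stories")  # assumes CREATE EXTENSION vector was run
register_vector(conn)
conn.execute("""CREATE TABLE IF NOT EXISTS story_vectors
                (id bigserial PRIMARY KEY, title text, embedding vector(384))""")

# Semantic search: titles nearest (by cosine distance, <=>) to the query.
rows = conn.execute(
    "SELECT title FROM story_vectors ORDER BY embedding <=> %s LIMIT 10",
    (embed("sci-fi"),),
).fetchall()
```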

[–] Colloidal@programming.dev 3 points 3 days ago (1 children)

Is there anything Postgres doesn’t do?

[–] TehPers@beehaw.org 4 points 3 days ago

I would say run Doom, but I'm not confident in that. At the very least, Skyrim hasn't been rereleased on it yet.

[–] liliumstar@lemmy.dbzer0.com 13 points 3 days ago (1 children)

I would scrape them into individual JSON files with more info than you think you need, just for the sake of simplicity. Once you have them all, you can work out an ideal storage solution, probably some kind of SQL DB. Once that is done, you could turn the JSON files into a .tar.zst and archive it, or just delete them if you are confident in the processed representation.

Source: I completed a similar but much larger story site archive and found this to be the easiest way.

[–] Bubs@lemm.ee 3 points 3 days ago (3 children)

That's a good idea! Would YAML be alright for this too? I like the readability and Python-style syntax compared to JSON.

[–] canpolat@programming.dev 7 points 3 days ago (2 children)

I would stay away from YAML (almost at all costs).

[–] Bubs@lemm.ee 3 points 3 days ago (2 children)

What's your reasoning for that?

At this point, I think I'll only use YAML as the scraper output and then create a database tool to convert that into whatever data format I end up using.

[–] Kissaki@programming.dev 2 points 2 days ago* (last edited 2 days ago) (1 children)

https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-from-hell

JSON is a much simpler (and consequently safer) format. It's also more universally supported.

YAML (or TOML) is decent for configuration that is read and written by hand. But for scraper output, where storage and follow-up workflows go through code parsing anyway, I would go for JSON.
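
The linked article's headline examples are easy to reproduce with PyYAML, which implements YAML 1.1:

```python
import yaml

print(yaml.safe_load("country: no"))    # {'country': False}; a bare "no" becomes a boolean
print(yaml.safe_load("version: 3.10"))  # {'version': 3.1}; the trailing zero is lost as a float
```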

[–] Bubs@lemm.ee 1 points 2 days ago

That's an interesting read. I'll definitely give json a try too.

[–] logging_strict@programming.dev 2 points 3 days ago (1 children)

Very wise idea. And if you want to up your game, you can validate the YAML against a schema.

Check out strictyaml.

The author is ahead of his time. He uses validated YAML to build stories and weave them into websites.

Unfortunately, the author also does the same with strictyaml's tests. That can get frustrating because the tests are too simple.
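
For what it's worth, a strictyaml schema for a story file might look like this sketch (the field names are just an assumed layout):

```python
from strictyaml import Map, Seq, Str, load

schema = Map({
    "title": Str(),
    "author": Str(),
    "tags": Seq(Str()),
    "body": Str(),
})

document = """
title: An Example
author: someone
tags:
  - fantasy
body: Once upon a time...
"""

story = load(document, schema).data  # raises a validation error if the shape is wrong
```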

[–] Bubs@lemm.ee 1 points 3 days ago

Gonna be honest, I'll need to research a bit more what validating against a schema is, but I get the general idea, and I like it.

For initial testing and prototypes, I probably won't worry about validation, but once I get to the point of refining the system, validation like that would be a good idea.

[–] logging_strict@programming.dev 2 points 3 days ago* (last edited 3 days ago)

Curious to hear your reasoning as to why YAML is less desirable? I would have thought the opposite.

Your strong opinion surprised me.

Maybe, if you'll allow it (and have a few shot glasses handy), I could take a stab at changing your mind.

But first, list all your reservations concerning YAML.

Relevant packages I wrote that rely on YAML:

  • pytest-logging-strict

  • sphinx-external-toc-strict

[–] towerful@programming.dev 3 points 3 days ago (1 children)

I see no reason you can't use YAML.
YAML and JSON are essentially identical for basic purposes.

Once the scraper has been confirmed working, are you going to be doing a lot of reading/editing of the raw data? If not, it might as well be a binary blob (though that's a bad idea, as it couples the raw data to your specific implementation).

[–] Bubs@lemm.ee 1 points 3 days ago

I'm not entirely sure yet, but probably yes to both. The story text will likely stay unchanged, but I'll likely experiment with various ways to analyze the stories.

The main idea I want to try is assigning stories "likely tags" based on the frequency of keywords. So castle and sword could indicate fantasy while robot and ship could indicate sci-fi. There are a lot of stories missing tags, so something like this would be helpful.

[–] liliumstar@lemmy.dbzer0.com 2 points 3 days ago

Yup, I think it'd work fine, especially if you want the ability to easily inspect individual items.

Any of the popular Python YAML libraries will be more than sufficient. With a bit of work, you can marshal the input (when reading files back) into Python (data)classes, making it easy to work with.
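
That marshalling step can be as small as this sketch with PyYAML and a dataclass (field names assumed):

```python
from dataclasses import dataclass

import yaml

@dataclass
class Story:
    title: str
    author: str
    tags: list[str]
    body: str

with open("story.yaml", encoding="utf-8") as f:
    story = Story(**yaml.safe_load(f))  # keys in the file must match the fields
```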

[–] solrize@lemmy.world 5 points 3 days ago* (last edited 3 days ago)

Use Python's sqlite3 module for the metadata; it now has some full-text search features that can probably handle a few thousand stories. For a bigger collection like AO3, try solr.apache.org or Elasticsearch, etc.
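
The full-text feature in question is FTS5; a sketch, assuming your Python's bundled SQLite was compiled with it (most are):

```python
import sqlite3

con = sqlite3.connect("stories.db")
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS stories_fts USING fts5(title, body)")
with con:
    con.execute("INSERT INTO stories_fts VALUES (?, ?)",
                ("An Example", "The knight rode toward the castle..."))

# MATCH does tokenized full-text search, unlike a LIKE scan.
for title, in con.execute(
        "SELECT title FROM stories_fts WHERE stories_fts MATCH ?", ("castle",)):
    print(title)
```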

[–] tonytins@pawb.social 6 points 3 days ago* (last edited 3 days ago) (2 children)

Based on my experience...

  • JSON is good for storing complex but readable data.
  • SQL is suited to demanding online loads (not exactly ideal for serialization).
  • YAML is a strange beast, but I've mostly seen it used for configuration.

[–] Ephera@lemmy.ml 6 points 3 days ago (1 children)

Yeah, don't use YAML for storing serialized data. It's intended to be hand-written.

[–] Bubs@lemm.ee 3 points 3 days ago (1 children)

Did not know that. I'll keep that in mind.

[–] bkhl@social.sdfeu.org 4 points 3 days ago

@Bubs @Ephera YAML is also a superset of JSON, so if you generate JSON, tools that handle YAML will still be able to use it.
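
That's easy to see with PyYAML, for example:

```python
import yaml

# Valid JSON is (almost entirely) valid YAML, so a YAML parser reads it fine.
print(yaml.safe_load('{"title": "An Example", "tags": ["fantasy"]}'))
# {'title': 'An Example', 'tags': ['fantasy']}
```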

[–] Bubs@lemm.ee 1 points 3 days ago

I don't know the limits of YAML, especially for large chunks of data, but I do like its easy readability and similarity to Python. I'll probably try out a bit of YAML as well as some of the other recommendations others have given me.

[–] bazzzzzzz@lemm.ee 4 points 3 days ago* (last edited 3 days ago)

If scraping is reliable, I'd use the classic Python pickle or json.dump.

For a few thousand I would just use a SQLite DB...

3 tables:

  • Story with fields: Id, title, text
  • Meta with fields: Id, story-id, subject, contents
  • Tags with fields: Id, story-id, tag

Use SQL joins for sorting etc.

SQLite is easily converted to other formats if you decide to use more complex solutions.
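
A sketch of that schema and one such join, using Python's sqlite3 and the field names from the list above:

```python
import sqlite3

con = sqlite3.connect("stories.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS story (id INTEGER PRIMARY KEY, title TEXT, text TEXT);
CREATE TABLE IF NOT EXISTS meta  (id INTEGER PRIMARY KEY,
                                  story_id INTEGER REFERENCES story(id),
                                  subject TEXT, contents TEXT);
CREATE TABLE IF NOT EXISTS tags  (id INTEGER PRIMARY KEY,
                                  story_id INTEGER REFERENCES story(id),
                                  tag TEXT);
""")

# All stories carrying a given tag, sorted by title, via a join.
rows = con.execute("""
    SELECT story.title FROM story
    JOIN tags ON tags.story_id = story.id
    WHERE tags.tag = ?
    ORDER BY story.title
""", ("fantasy",)).fetchall()
```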

[–] fakeplastic@lemmy.dbzer0.com 3 points 3 days ago (1 children)

Since you want to be able to access these stories as well as store them, you can kill two birds with one stone by creating a Django app with a SQLite backend. The built-in admin site will let you browse and search the content without having to write much code.

[–] Bubs@lemm.ee 1 points 3 days ago (1 children)

Is this something that can be run locally without a server? I'm aiming for something as simple as opening the notes app on your phone and selecting a story.

[–] fakeplastic@lemmy.dbzer0.com 2 points 3 days ago (1 children)

Yes, you can run the web server locally and access it in your browser like any other site. You just wouldn't be able to access it from outside your home network.

[–] Bubs@lemm.ee 1 points 3 days ago

Gotcha. I think I'm aiming for something that runs off a single program. I want to be able to start it up whenever or even transfer it to a drive and use it on something like my laptop. Your idea sounds like it may work, but I'll have to give it a deeper look.

[–] MagicShel@lemmy.zip 2 points 3 days ago (1 children)

I do a lot of web services and I'm a big fan of SQL, but I wouldn't use a SQL database for this myself. Something like MongoDB or Cassandra would probably serve you better (depending on whether you prefer a REST interface to your data or something more conventional). You've got a very flat structure, except for tags.

Tags are the one feature that might make me choose SQL, due to the many-to-many relationship.

I'm not sure what role you think YAML would play. You could store each story as YAML, but then you'd have to parse basically everything to filter and sort. The story should just be a massive text field, and the metadata goes into its respective fields. Tags might be comma-delimited, or in SQL you could normalize them so that you have three tables: stories, tags, and a join table that basically looks like

StoryId | TagId

I'd at least first try a non-relational database structure, because filtering and sorting by tag might still be fast enough. If it's too slow, then you could go SQL, but I'd aim for the less complex solution first.

[–] Bubs@lemm.ee 2 points 3 days ago

A few keywords in there I'll have to look up, but I get the majority of it.

Yeah, I'm not too sure yet how complex the tags will be in the end. They are basically genres at the start, but I may make them more complex as I go.

After reading some of the other comments, I doubt I'll use YAML as the main storage method. I do like the idea of using YAML for the scraper output, though. It would give me a nice way to organize the data elements for each story in a way that can be easily read when needed.