I think you might have better luck asking on the IPFS forum. But yes, you'll definitely need to split it up into blocks. I'm not sure whether there's any standard/agreed-upon way of splitting it, though.
IPFS already does that: it chunks the data, and the final CID is the hash of the root of the resulting tree.
The maximum size for a block in Bitswap is 2 MiB (the smallest maximum block size an implementation must accept to be compliant with Bitswap 1.2.0: https://github.com/ipfs/specs/pull/269).
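To make that concrete, here's a minimal sketch of controlling the chunking yourself and then inspecting the resulting tree. It assumes a local kubo node with the `ipfs` binary on PATH; the file name and the 1 MiB chunk size (comfortably under the 2 MiB Bitswap ceiling) are just illustrative choices.

```python
# Sketch: add a large file with an explicit fixed-size chunker, then list
# the child blocks the root CID points at. Assumes a running kubo daemon.
import subprocess

def add_with_chunker(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Add a file, letting IPFS split it into fixed-size chunks; return the root CID."""
    out = subprocess.run(
        ["ipfs", "add", "--quieter", f"--chunker=size-{chunk_size}", path],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()  # root CID of the resulting DAG

def list_chunk_refs(cid: str) -> str:
    """List the direct child block CIDs (the chunks) referenced by the root node."""
    out = subprocess.run(
        ["ipfs", "refs", cid],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    root = add_with_chunker("big-video.mp4")  # hypothetical file name
    print("root CID:", root)
    print(list_chunk_refs(root))
```

Fetching the root CID from another node then pulls the chunks individually over Bitswap, so no single block ever exceeds the 2 MiB limit.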
https://github.com/synapsemedia
https://github.com/SynapseMedia/nucleus
I think they were working on something similar to this for media files, but their website is down. https://synapsemedia.io/
Well, at the risk of being unhelpful ( :) ), I'd take a second to reevaluate whether you really need the huge files in the first place, or whether it would be better/possible to have the content of the file unpacked natively inside IPFS.
IPFS is just not really optimized for big binary files, and you're running into that. It has a ton of features for collecting and connecting atoms of raw content outside of files, though, and if your application involved content that could be handled natively like that, you might find some of those features to be a helpful bonus.
Think of IPFS as a database, not a filesystem. Using it for huge files is akin to stuffing the whole file into a single field of an SQL table. It's kind of awkward.
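As a rough sketch of what "handling content natively" could look like: store structured records as small IPLD nodes with `ipfs dag put` instead of one opaque multi-GB blob. This again assumes a local kubo daemon, and the record shape here is made up purely for illustration.

```python
# Sketch: store a JSON-shaped record as an IPLD node and get back its CID.
# `ipfs dag put` reads dag-json from stdin and stores it as dag-cbor by default.
import json
import subprocess

def dag_put(record: dict) -> str:
    """Store a record as an IPLD node and return its CID."""
    out = subprocess.run(
        ["ipfs", "dag", "put"],
        input=json.dumps(record),
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # Hypothetical media index entry: metadata plus links to the parts,
    # rather than one huge file shipped around as a single unit.
    cid = dag_put({"title": "episode-01", "duration_s": 1800,
                   "segments": []})  # segment CIDs would be linked here
    print("record CID:", cid)
```

The point is that small, linked nodes like this are the kind of content IPFS is comfortable with; a huge opaque binary is the awkward case.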
Anyway, I also worry about performance when people start talking about big files. That comes with A LOT of overhead. That said, I have heard some people talk about getting acceptable real-world performance.