First off, thank you for reading/responding.
15+ years as an enterprise developer here.
I have an idea for a product offering (and a POC that's almost done) that I think Alchemy/Infura might be missing, but I am totally ready to have my bubble popped here. There may be a great reason no one has done this before, and if so, I'm ready to hear it.
Just in case there is not a good reason: I think I see an issue with Alchemy's pub/sub that makes it kind of sub-par for a few important blockchain use cases, especially where enterprise adoption is concerned. Here are my concerns:
- eth_subscribe is not fault-tolerant: there is no stateful buffer for your events. Delivery is fire-and-forget over the websocket, and if you're not connected, you miss it.
- eth_newFilter + eth_getFilterChanges seems like a rad solution, but it has never worked for me through Alchemy. After about 30 seconds the filter deletes itself, regardless of how often I call eth_getFilterChanges.
Either way, this seems to slightly miss the point of wanting a reliable queue, since it relies on polling (likely two levels of polling: once internally by Alchemy to gather logs across block spans, and once by the "subscriber" to pull those logs over the wire). A rough sketch of that pattern follows.
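For anyone who hasn't used these: here's roughly the two-level polling pattern I mean, hitting a raw JSON-RPC endpoint. The URL is a placeholder, error handling is omitted, and the ERC-20/721 Transfer topic is just an example filter.

```ts
// Sketch of the filter-polling pattern: the node buffers matching logs under a
// filter ID, and the client has to keep polling for changes.
const RPC_URL = "https://example-node.invalid"; // placeholder endpoint

async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

async function pollFilter() {
  // Server-side filter; this is the thing that keeps expiring on me.
  const filterId = await rpc("eth_newFilter", [
    { topics: ["0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"] }, // Transfer(address,address,uint256)
  ]);

  // Client-side polling loop: the second level of polling.
  setInterval(async () => {
    const logs = await rpc("eth_getFilterChanges", [filterId]);
    for (const log of logs) {
      // If the process dies here, these logs are gone; there is no durable buffer.
      console.log(log.blockNumber, log.transactionHash);
    }
  }, 10_000);
}
```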
I feel like there may be a product offering hidden in what I've learned from using these systems at scale.
For example, I can't see a great way (and I might just be wrong, so please point me to solutions if you know of them) to use Alchemy's pub/sub systems to efficiently do these things at scale (see the sketch after this list):
- Give me all Bored Ape transfers since the Bored Apes contract was created, then keep watching forever.
- Give me all transfers to the zero address for contract X between blocks Y and Z.
- Give me all the addresses that have EVER owned a Bored Ape, regardless of current balance.
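To make the first one concrete: as far as I can tell, today you end up stitching a paged eth_getLogs backfill onto a websocket subscription yourself, roughly like this sketch (ethers v6; the contract address, deploy block, and chunk size are placeholders, and handling the gap/overlap between the two paths is still your problem):

```ts
import { JsonRpcProvider, WebSocketProvider } from "ethers";

const http = new JsonRpcProvider("https://example-node.invalid"); // placeholders
const ws = new WebSocketProvider("wss://example-node.invalid");

const CONTRACT = "0x0000000000000000000000000000000000000000"; // placeholder address
const DEPLOY_BLOCK = 12_000_000; // hypothetical deploy block
const CHUNK = 2_000;             // provider-dependent log-range limit

async function backfillThenWatch() {
  const head = await http.getBlockNumber();

  // 1) Backfill: page through historical logs in bounded block ranges.
  for (let from = DEPLOY_BLOCK; from <= head; from += CHUNK) {
    const to = Math.min(from + CHUNK - 1, head);
    const logs = await http.getLogs({ address: CONTRACT, fromBlock: from, toBlock: to });
    logs.forEach(handle);
  }

  // 2) Watch: switch to a websocket subscription. Anything emitted between the
  //    backfill finishing and the subscription attaching can be missed, and
  //    anything emitted while this process is down is simply lost.
  await ws.on({ address: CONTRACT }, handle);
}

function handle(log: unknown) {
  // Idempotent handling is on you, since the two paths can overlap.
  console.log(log);
}
```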
I'm not promoting anything because it's still just an idea, but I have a POC of a website that a subscriber could integrate with using something like 5 lines of JS (roughly the sketch below). You would subscribe to blockchain topic filters with at-least-once delivery.
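Something in this spirit; every name here is hypothetical, it's just the surface I'm aiming the POC at, not a published SDK:

```ts
import { connect } from "some-log-subscriber"; // hypothetical package name

const sub = await connect({ apiKey: process.env.API_KEY });
await sub.subscribe(
  { chain: "eth-mainnet", address: "0x...", topics: ["Transfer"] },
  async (event) => { await handle(event); } // ack'd only after this resolves, i.e. at-least-once
);
```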
For those curious: the stack is NextJS + Cognito + K8s + Postgres + RabbitMQ.
Having RabbitMQ in the middle means that if your app goes down, you are still guaranteed to get your events. It also means that (see the consumer sketch after this list):
- you (the subscriber) can parallelize the HECK out of processing these logs.
- we (the service gathering the logs) can also parallelize the query/aggregation of the events.
- we (the service gathering the logs) can pull events from "latest" blocks while watching on your behalf and re-queue them if they are involved in a re-org. That means you can safely operate closer to the head of the chain and possibly get events sooner than other systems can safely allow.
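On the subscriber side, this is just a plain RabbitMQ consumer with manual acks, roughly like the sketch below (amqplib's promise API; the queue name and URL are hypothetical):

```ts
import amqp from "amqplib";

async function consume() {
  const conn = await amqp.connect("amqp://localhost"); // hypothetical broker URL
  const ch = await conn.createChannel();
  await ch.assertQueue("bayc-transfers", { durable: true }); // hypothetical queue name

  // Prefetch > 1 lets one worker keep many messages in flight, and you can run
  // as many workers as you like against the same queue.
  await ch.prefetch(32);

  await ch.consume("bayc-transfers", async (msg) => {
    if (!msg) return;
    try {
      const log = JSON.parse(msg.content.toString());
      await handleIdempotently(log); // your handler must tolerate redelivery
      ch.ack(msg);                   // ack only after success: at-least-once
    } catch {
      ch.nack(msg, false, true);     // requeue on failure (or route to a dead-letter queue)
    }
  });
}

async function handleIdempotently(log: unknown) {
  // e.g. upsert keyed on (txHash, logIndex) so replays and re-orged duplicates are harmless.
}
```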
The question I have for this sub is: is this a complete waste of time, or would someone find value in it?
I think there likely is some value here. I ended up building around this issue in-house, though I do things a bit differently: I actually poll full logs for block ranges (sized dynamically per chain) and track the last successfully processed block, etc. I focus on at-least-once delivery with idempotent event handling. Partitioning of filters is based on consistent hashing, and offsets are stored per chain and partition in Postgres (rough sketch below). If what you're describing had existed on Infura, I wouldn't have built it. Kafka-style consumer groups would be ideal.
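Roughly, each chain/partition loop looks like this (pg + ethers; the table and column names are made up for illustration, and the consistent-hash assignment of filters to partitions is omitted):

```ts
import { Pool } from "pg";
import { JsonRpcProvider } from "ethers";

const db = new Pool();
const provider = new JsonRpcProvider(process.env.RPC_URL);
const RANGE = 1_000; // tuned per chain

async function pollOnce(chainId: number, partition: number) {
  // Assumes the offsets row was seeded at the contract's deploy block.
  const { rows } = await db.query(
    "SELECT last_block FROM offsets WHERE chain_id = $1 AND partition = $2",
    [chainId, partition]
  );
  const from = rows[0].last_block + 1;
  const to = Math.min(from + RANGE - 1, await provider.getBlockNumber());
  if (to < from) return;

  const logs = await provider.getLogs({ fromBlock: from, toBlock: to });
  for (const log of logs) {
    // At-least-once: handlers must be idempotent (e.g. upsert on (txHash, logIndex)).
    await handleIdempotently(log);
  }

  // Advance the offset only after handling succeeds, so a crash replays the range.
  await db.query(
    "UPDATE offsets SET last_block = $3 WHERE chain_id = $1 AND partition = $2",
    [chainId, partition, to]
  );
}

async function handleIdempotently(log: unknown) { /* upsert keyed on (txHash, logIndex) */ }
```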