I find it super convenient.
Also, it doesn't have a limit. Pretty sure I bought my last car with contactless on my phone, but that was years ago.
I also carry a wallet? Cause, yknow, ID and stuff.
Phone is just way more convenient. Especially since I don't have a limit on its contactless amount. Whereas with my card, I would have to chip&pin for anything over £40
The only reason I stopped using GrapheneOS was because Google contactless payments didn't work.
Loved everything else about graphene tho
I thought T568B at each end was standard practice these days
I've played it fine on steam proton on arch
Gotta research dividends!
Edit:
Maybe Boeing made a simple mistake and invested in Relaxation & Dividends, instead of Research & Development.
Pretty sure cargo dragon is just a stripped down crew dragon to make more space for cargo.
Or maybe, crew dragon is a cargo dragon fitted for passengers... Seeing as cargo dragon flew with cargo and docked to the ISS in 2012 (crew dragon was 2020).
Pretty sure crew dragon has all the auto/remote to fully launch and then dock to the ISS.
Cargo dragon is auto/remote docked. Doesn't even need canadarm. So would make sense that crew dragon is as well
Older games were specifically designed for specific console hardware.
They leveraged particular features of that hardware.
They literally hacked the consoles they were releasing on to get their desired results.
And because it's consumer gaming hardware/software, neither backwards nor forwards compatibility for all the stuff they pulled was ever built in. So a game would have to target multiple platforms to actually release on multiple platforms.
It's like why so many games don't run on Mac OSX. "Why don't they just release the Windows software for free on Mac OSX?" Because it needs to be redesigned to work on OSX, which costs money.
Everything up to, what, the PS4? is probably tailored to that specific hardware. Games that released on PS3 and xbox-whatever would have some core software dev team, then hardware-specific developers. The game would be built for the target hardware.
At some point, things like Unity and Unreal Engine took over, with generic code and targeted compiling. Pretty much (not quite) allowing developers to "just hit compile", and release to multiple architectures.
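Very roughly, the engine trick is keeping the game logic generic and hiding the platform-specific bits behind a common interface, with the right backend picked per target. Here's a toy Python sketch of the idea (all names are made up for illustration; real engines do this in C/C++ and pick the backend at build time, not at runtime):

```python
import sys

# The "engine" exposes a generic interface that the game code talks to.
class Renderer:
    def draw(self, sprite: str) -> None:
        raise NotImplementedError

# Platform-specific backends hide the hardware/OS differences.
class DirectXRenderer(Renderer):
    def draw(self, sprite: str) -> None:
        print(f"[DirectX] drawing {sprite}")

class MetalRenderer(Renderer):
    def draw(self, sprite: str) -> None:
        print(f"[Metal] drawing {sprite}")

class VulkanRenderer(Renderer):
    def draw(self, sprite: str) -> None:
        print(f"[Vulkan] drawing {sprite}")

def pick_renderer() -> Renderer:
    # Real engines make this choice when compiling for a target platform;
    # a runtime check is just the simplest way to show the idea.
    if sys.platform == "win32":
        return DirectXRenderer()
    if sys.platform == "darwin":
        return MetalRenderer()
    return VulkanRenderer()

def game_logic(renderer: Renderer) -> None:
    # The game code never needs to know which platform it's running on.
    renderer.draw("player")

game_logic(pick_renderer())
```

The game logic only ever sees the generic interface, which is why "just hit compile" mostly works once the engine has a backend for your target.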
Official re-releases of Nintendo games have generally been on an emulated system, where Nintendo has developed that emulation to work with the original software.
There are some re-releases where the game has essentially been rebuilt from the ground up, using the original assets but targeting modern (and flexible) game engines.
Both of these take a lot of work, so they're not free. Worth $60 or whatever Nintendo charges? Meh, that's competing with real games.
If you own (or buy) a nes/snes/N64 cart, you can rip it. There are plenty of ways.
It's not the source, but it's what it compiles to. And you can reverse engineer the source, then adapt it to modern game engines. There are a few open source projects that do this. Their quality varies.
Or you can build an emulator to run that software as if it were running on the original hardware.
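For a sense of what an emulator actually is in code: it's basically a loop that fetches the original program's bytes and mimics what the original CPU would have done with them. A made-up toy example in Python (a real NES/SNES emulator also has to emulate the graphics and sound chips, timing, and so on):

```python
# Toy "CPU" with three instructions, purely to show the fetch/decode/execute
# shape of an emulator. A cartridge rip is just bytes like this; the emulator
# interprets them in software instead of running them on the original chip.
LOAD, ADD, HALT = 0x01, 0x02, 0xFF

rom = bytes([LOAD, 5, ADD, 3, ADD, 2, HALT])  # pretend cartridge dump

def run(rom: bytes) -> int:
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        opcode = rom[pc]              # fetch
        if opcode == LOAD:            # decode + execute
            acc = rom[pc + 1]
            pc += 2
        elif opcode == ADD:
            acc += rom[pc + 1]
            pc += 2
        elif opcode == HALT:
            return acc
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")

print(run(rom))  # prints 10
```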
Nintendo can skip the rip, decompile and reverse engineering steps. They likely have access to the source code, and the actual design specs for the hardware (not just what they tell developers - who then hack the hardware anyway)
All of this requires a LOT of work. So a sellable product from someone like Nintendo requires a lot of investment.
Emulators are good. Any emulator used for speedrun leaderboards on an equal footing with actual hardware (ie times are comparable, even if they run in separate categories) will be good enough that you wouldn't know the difference.
A reminder that Georgia (state) was a part of the whole 2020 election denial bullshit.
Georgia (country) has a seemingly left-leaning president (wanting to join the EU), but with a parliament seemingly working against them (eg overriding their veto of the controversial Foreign Agents law).
This is a very very broad outsiders opinion. I'd love to hear from a variety of people living in Georgia, and what they reckon!
Follow sensible H&S rules.
Split the responsibility between the person that decided AI is able to do this task and the company that sold the AI saying it's capable of this.
For the case of the purchasing company, obviously start with the person that chose that AI, then spread that responsibility up the employment chain. So the manager that approved it, the manager's manager, all the way to the executive office & company as a whole.
If investigation shows that the purchasing company ignored sales advice, then it's all on the purchasing company.
If the investigation shows that the purchasing company followed the sales advice, then the responsibility is split, unless the purchasing company can show that they did due diligence in the purchase.
For the supplier, start with the person that sold that tech. If the investigation shows that the engineers approved that sales pitch, then it goes up the engineering employment chain. If the sales person ignored the devs, then up the sales employment chain. Up to the executive level.
No scapegoats.
Whatever happens, the C-suite, the companies, and probably a lot of managers get hauled into court.
Make it rough for everyone in the chain of purchase and supply.
If the issue is a genuine mistake, then appropriate insurance will cover any damages. If the issue is actually fraud, then EVERYONE (and the company) from the level of handover upwards should be punished
These days, I just use postgres for my projects.
It's rare that it doesn't do what I need, or extensions don't provide the functionality. Postgres just feels like cheating, to be honest.
As for flavour, it's up to you.
You can start with an official image. If it is missing features, you can always just patch on top of their docker image or dockerfile.
There are projects that build in additional features, like automatic backups, streaming replication with automatic failover, connection pooling, built-in web management, etc.
Most of the time, the database is hard-coded.
Some projects will use an ORM that supports multiple databases (database agnostic).
Some projects will only use basic SQL features, so they can theoretically work with any SQL database; others will use extended features of their chosen database, so they are more closely tied to that database.
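As a concrete example of the database-agnostic route, something like SQLAlchemy in Python will run the same query code against different databases just by swapping the connection URL (it papers over the driver differences even without the full ORM layer). A rough sketch, with the table and URLs made up for illustration:

```python
from sqlalchemy import create_engine, text

# Swap the URL and the same code talks to a different database.
# engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")
engine = create_engine("sqlite:///:memory:")  # throwaway DB, just for the sketch

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO files (name) VALUES (:n)"), {"n": "report.pdf"})

with engine.connect() as conn:
    rows = conn.execute(text("SELECT name FROM files")).fetchall()
    print(rows)  # [('report.pdf',)]
```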
With versions, again, some features get deprecated. Established databases try to stay stable, and projects try to use databases sensibly. Why use hacky behaviour when dealing with the raw data?!
Most databases will have an LTS version, so stick to that and update regularly.
As for Redis, it's a cache.
If "top 10 files" is a regular query, instead of hitting the database for that, the application can cache the result, and the application can query redis for the value. When a new file is added, the cache entry for "top 10 files" can be invalidated/deleted. The next time "top 10 files" is requested by a user, the application will "miss" the cache (because the entry has been invalidated), query the database, then cache the result.
Redis has many more features and many more uses, but it is commonly used for caching. It is a NoSQL database, supports pub/sub, can be distributed, all sorts of cool stuff. At the point you need Redis, you will understand why you need Redis (or NoSQL, or pub/sub).
For my projects, I just use a database per project or even per service (depending on interconnectedness).
If it's for personal use, it's nice to not worry about destroying other personal stuff by messing up database stuff.
If it's for others, it's data isolation without much thought.
But I've never done anything at extremely large scales.
Last big project was 5k concurrent, and I ended up using Firebase for it due to a bunch of specific requirements