As I recall, Reddit really dragged its heels on implementing GDPR-mandated data exports when the law went into effect in 2018, officially citing technical challenges and privacy concerns. I'm sure it was mostly the technical challenges plus laziness: an old codebase that has kind of sucked since forever, and they're not keen on touching it.
I requested archives of my data from Reddit as per GDPR a few weeks ago, and it's still pending. The page said "oh, uh, we'll provide them within 30 days" ...which is well within the letter of the law, if not the spirit. Other sites I've requested my data from usually provide it within days.
All I can say as someone who's been perplexed about Reddit's tech side for a long time is that it's pretty damn emblematic of the whole site.
They might not have bothered to implement an automated setup just for EU & UK users, meaning it's an ad-hoc process each time. If they go over the one-month deadline, you can head over to the ICO website and file a complaint.
The pitch for federation gives the misleading impression that the system provides a universal (or at least portable) account mechanism. This seems to be a common point of confusion among users taking a first look at the emerging platform.
From a usability standpoint, the sign-in process ought to be able to tell when someone is trying to join from a non-local instance, and provide means to redirect or authenticate them appropriately. Maybe something in the style of "Sign in with Google," only simpler.
I think upcoming versions of Lemmy/Kbin are attempting to improve link behavior to make it less likely to unintentionally stray from your home instance.
Like many others, I've been wondering "Hmm, where the heck do I get all the cute animal pictures now?"
...but the answer to that question was staring me right in the face.
I'd just do what I've always done if I want cute animal pictures.
I mean, Pinterest is right there.
Reject subredditery, embrace tradition.
I have a Mastodon account AND a Kbin account now. I'm signing up for different federated services and linking 'em all together. I'm loving this new protocol so much. It's quiet...
It feels similar to the early-2000s internet and I'm loving it.
Unfortunately, these are problematic when dealing with instances that are not your home instance. Any links to the post page will be absolute remote instance URLs, which means you cannot interact with the post (e.g. leave a comment). The URL really needs to be made relative to your home instance for that to work, but for the life of me, I cannot figure out how to fix that for a specific post. I can only fix the URL to the magazine/community itself and then hope to locate the post within it again.
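The magazine-level workaround can at least be mechanized. Here's a minimal sketch, assuming your home instance follows the usual kbin convention of addressing remote magazines as `/m/name@remote.host` (the instance names below are placeholders, not a recommendation):

```python
from urllib.parse import urlparse

HOME = "kbin.social"  # assumption: your home instance

def to_home_magazine(remote_url: str, home: str = HOME) -> str:
    """Rewrite a remote kbin magazine URL into its home-instance form.

    e.g. https://fedia.io/m/android -> https://kbin.social/m/android@fedia.io
    """
    parsed = urlparse(remote_url)
    parts = parsed.path.strip("/").split("/")
    if len(parts) < 2 or parts[0] != "m":
        raise ValueError(f"not a magazine URL: {remote_url}")
    magazine = parts[1]
    if "@" in magazine:  # already a federated reference; keep its host
        return f"https://{home}/m/{magazine}"
    return f"https://{home}/m/{magazine}@{parsed.netloc}"
```

This only gets you to the magazine on your home instance, mirroring the manual process above; the per-post mapping is exactly the part that remains unsolved.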
If there is a way to get home instance-relative RSS feeds, I'm all ears! Failing that, I might work on a scraper that can take URLs of the form:
and generate RSS feeds out of them? But I don't want to reinvent the wheel if something like this is already possible?
It might also be useful to someone trying to write an app with a multireddit-type feature? I will definitely release source if I come up with anything.
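If I do build the scraper, the feed-assembly half is the easy part and needs nothing beyond the standard library. A sketch of just that half, with the scraping step left out (the item dicts and their keys are my own assumption about what a scraper would hand over):

```python
import xml.etree.ElementTree as ET

def build_rss(title: str, link: str, items: list[dict]) -> str:
    """Assemble a minimal RSS 2.0 feed from scraped post dicts.

    Each item dict is assumed to carry 'title', 'link', and 'pubDate'
    keys, with pubDate already formatted per RFC 822 as RSS expects.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = f"Scraped feed for {title}"
    for post in items:
        item = ET.SubElement(channel, "item")
        for key in ("title", "link", "pubDate"):
            ET.SubElement(item, key).text = post[key]
    return ET.tostring(rss, encoding="unicode")
```

The output of a function like this could be served from anywhere, which is also roughly what a multireddit-style aggregator would need under the hood.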
This morning when I opened Infinity to check Reddit, I saw the announcement above: they're going subscription-only. Ironically enough, I couldn't scroll down to see the rest of the message including prices, if there were any. I also couldn't see if there was a button to close the message or start a paid subscription. I couldn't proceed to Reddit at all. My only option was to close the app completely. So I uninstalled it.
That's it for me using Reddit on mobile! Can't say I'll miss it much. But I added a LOT of content to Reddit that way, so it's their loss. Fuck you, spez!
I have auto update turned off for apps, so I didn't get the last update for Infinity and I can still use it to see Reddit, for now. Once it dies I'm out though. I can't imagine they will be able to get enough subscriptions to support the app, so I'm not really sure what the goal is here.
Even the people subbed to r/Infinity_For_Reddit are saying they won't buy a subscription. Wouldn't Infinity be racking up a huge bill from Reddit once the API change goes into effect? There's no way subscriptions will cover that so I don't understand why they're doing this.
If you're nuking your old reddit content, this might be important. For me, the reddit history visible on the website was far less comprehensive than the API could access.
As a 10+ year redditor, I would sometimes go back through my profile and delete stale or irrelevant content. Deciding to try a faster approach this week, I installed Redact (available at redact dot dev, or on the Google Play store). It lets you bulk delete, or preview things first, which I wanted to do in case there was anything worth preserving.
When scanning posts/comments, it reports that it's sorting by new first, then hot, then controversial.
The "new" results were the same as I could see on my profile, but then the "hot" and "controversial" scans found page after page of comments that I couldn't see on my u/ page. There were 50 results per page, and I didn't keep an accurate count, but I removed at least 1000 comments, mostly from 2013-2018, via the API.
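This multi-sort behavior makes sense given that Reddit's listings have historically been capped at roughly 1,000 items per sort order, so each sort surfaces a different subset of an old account's history. I assume a tool like Redact merges the listings and deduplicates by comment ID; a sketch of that merging step, with plain lists of dicts standing in for API pages:

```python
def merge_listings(*listings: list[dict]) -> list[dict]:
    """Merge comment listings from several sort orders, deduplicating by id.

    Each listing is assumed to be a list of dicts with at least an 'id'
    key, as Reddit API listings provide; first occurrence of an id wins.
    """
    seen: set[str] = set()
    merged: list[dict] = []
    for listing in listings:
        for comment in listing:
            if comment["id"] not in seen:
                seen.add(comment["id"])
                merged.append(comment)
    return merged
```

With the per-sort caps, three sorts can yield far more unique items than any single one, which matches what I saw.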
No idea how many people this could help, so it seemed like a worthwhile first post on kbin.