

Great


Great


And doing things faster is not the same as doing them better; often it's the opposite


There’s also https://html.duckduckgo.com/. It’s like the main page, but without JavaScript.


Reducing the albedo of some area just to disperse the captured energy for no utility (AI) is still harmful to the environment and contributes to Earth’s energy imbalance. Solar energy is great when it replaces fossil fuel emissions, not when it’s just wasted.


Glad to be helpful ^ - ^


I have this other bookmarked link that may be helpful:
https://osgameclones.com/


I have this list bookmarked. Maybe it can help:
https://github.com/bobeff/open-source-games#business-and-tycoon-games


Let’s now test the inertia of TikTok users. I know a lot who feel like they’re the bastion of the resistance against fascism and love to call X users fascists. Now TikTok has turned fascist as well. Let’s see if they leave it in substantial numbers.




It must be really demanding in terms of hardware and network to host video content like TikTok’s. I wonder if the Fediverse volunteers will be able to keep up with it if the userbase grows too much.


I wish those laws would backfire and just make people abandon social media instead.
Or we could simply skip that and hold the corporations accountable for all the damage they’re doing.


Have we ever been to a quiet place? Sometimes we only realize something is messing with us when we feel the relief after it stops.


some people (…) are asking “can you game on DDR3?” The answer is a shocking yes.
“Shocking”. Really?
Browsing the internet as a third worlder always gives me these eye-rolling moments. Sigh…
I don’t see people around me viewing the corporations as evil because they humanize the machines, but the opposite: I see people talking to machines and taking their advice as if humans were talking to them, which creates some form of affection for the models and the corporations. I also see court decisions being biased by attributing a human perspective to machines.
Like really, if I hear someone at my university talking about the conversation they had with their “friend”, I will go crazy.
One possibility:
While many believe that LLMs can’t output the training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models…
Note that this neutral language makes it more apparent that it’s possible that LLMs are able to output the training data, since that’s what the model’s network is built upon. By using personifying language, we’re biasing people into thinking about LLMs as if they were humans, and this will affect, for example, court decisions, like the ones related to copyright.
Hmm, now I wonder what’s in the 2 bad corners…
While many believe that LLMs do not memorize much of their training data
It’s sad that even researchers are using language that personifies LLMs…
Do you mean Telegram, as in the app that stores all chats in plain text by default and uses a cryptography scheme that isn’t recommended?
A lot of people are too alienated from everything, or can’t make connections between brands and what’s behind them, because everything not directly visible in front of us is too abstract for them. And there are those who are like “all brands are terrible, it won’t make any difference anyway”. Well, and then there are the Nazi sympathizers…