• 0 Posts
  • 44 Comments
Joined 7 months ago
Cake day: July 14th, 2025

  • LSAG is a good shout but I’m not sure it’s sufficient. It enables anonymous verification of something against a set of known public keys, but you still need to make sure that set of public keys comes from real humans. It’s not proof that a user has a property (e.g. being human), it’s just proof that they are a user.
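    To make the limitation concrete, here’s a toy AOS-style ring signature sketch (plain Schnorr ring, not full LSAG with its linking tag, and the group is laughably small — illustration only, NOT real cryptography):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
# Far too small to be secure -- purely for illustration.
p, q, g = 2039, 1019, 4

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def ring_sign(msg, pubs, s, x_s):
    """Sign msg as anonymous member s of the ring of public keys pubs."""
    n = len(pubs)
    c, t = [0] * n, [0] * n
    u = secrets.randbelow(q - 1) + 1
    c[(s + 1) % n] = H(msg, pow(g, u, p))
    i = (s + 1) % n
    while i != s:  # walk the ring, simulating every other member
        t[i] = secrets.randbelow(q - 1) + 1
        c[(i + 1) % n] = H(msg, pow(g, t[i], p) * pow(pubs[i], c[i], p) % p)
        i = (i + 1) % n
    t[s] = (u - x_s * c[s]) % q  # close the ring with the real secret key
    return c[0], t

def ring_verify(msg, pubs, sig):
    c0, t = sig
    c = c0
    for i in range(len(pubs)):
        c = H(msg, pow(g, t[i], p) * pow(pubs[i], c, p) % p)
    return c == c0
```

    The catch is visible right in `keygen()`: a bot can generate a keypair and join the ring as easily as a human can, so a valid signature only ever proves “I hold one of these keys”, never “one of these keys belongs to a human”.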

    But yes this is sort of a digression from the actual main problem. The real anti-bot solution is a mix of methods imo.


  • Maybe we can agree to disagree, because I don’t think a specific demographic is enough to overcome the negative network effect at the start. The problem, imo, is that the attrition rate of dating apps is really high, and dating apps are only good if a lot of people are located geographically nearby. You either need broad appeal to avoid running out of people early on, or a demographic that is unusually geographically concentrated and can outpace the attrition rate (ENM comes to mind for the latter).

    Of course, you could always make something for dating without the geo proximity, but I think most people won’t want to use something like that at all.

    The beauty of new FOSS projects is that they’re quite often hosted and developed for free, so I don’t think that’s much of a limiting factor as long as the community is there. That’s also why I think it’s important to make it big quickly, because that’s the way to get a big enough community before the creator loses interest.



  • NGram@piefed.cat to Open Source@lemmy.ml · First Open Source dating app · 1 day ago

    Other apps do have some good anti-bot measures which could be adopted for a FOSS project. The problem with a lot of cryptographic solutions here is that cryptography is usually about proving your identity rather than proving something about your identity. Tor is also focused on privacy from middlemen, which doesn’t really make sense for a dating app.

    I think the challenge boils down to how to prove you’re human without biometrics or other PII. And I think the sad reality is that you can’t prove it. Though you may be able to prove you have unique PII with some sort of zero-knowledge proof…
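    A rough sketch of what that could look like (NOT an actual zero-knowledge proof — just the “nullifier” idea behind Semaphore-style designs, boiled down to a hash): the service stores only a one-way value derived from the PII, so it can reject duplicate signups without keeping the PII itself. A real design would use a ZK circuit so the server never sees the PII at all, and would derive the nullifier from a high-entropy secret rather than guessable PII:

```python
import hashlib

# Registry of opaque "nullifiers" -- one-way fingerprints of PII.
seen_nullifiers = set()

def nullifier(pii: str, app_salt: str = "dating-app-v1") -> str:
    # The same PII always maps to the same opaque value for this app,
    # so duplicates are detectable without storing the PII.
    return hashlib.sha256(f"{app_salt}:{pii}".encode()).hexdigest()

def register(pii: str) -> bool:
    n = nullifier(pii)
    if n in seen_nullifiers:
        return False  # this PII has already been used to sign up
    seen_nullifiers.add(n)
    return True
```

    (The `app_salt` and function names are made up for the sketch; the point is only that uniqueness can be enforced without retaining the underlying PII.)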


  • Unfortunately I think projects like this face extra challenges beyond those of regular social media platforms. There’s also retromeet, which seems even more dead (it may not even have made it to a stable release).

    The idea is great but for a dating app to work, it needs to quickly get past two network effects: the global network effect (there must be enough people globally, or in a larger region, to get other people interested in trying out the platform) and the local network effect (there must be enough people to match with in most users’ local areas to keep enough people interested). With corporate backing that’s easy enough to do with a dedicated team to market and develop, but FOSS rarely has that sort of manpower. Slow growth is hard too, since users tend to leave dating apps quite often.

    There’s also the funny problem that if the dev gets a partner, the partner usually doesn’t appreciate them staying on dating apps. Developing a dating app could be even worse for the relationship… actually, now that I think of it, maybe I should start a similar project since I don’t like dating…








  • This reads like LLM slop.

    According to technical reports from Phoronix, the milestone was reached by Alyssa Rosenzweig, a key figure in the graphics driver development for the Asahi Linux project.

    The linked Phoronix article (published yesterday) credits Michael Reeves, noopwafel, and Shiz and does not mention Alyssa Rosenzweig at all.

    The speed at which the M3 was tamed—booting into a KDE Plasma desktop environment so soon after the hardware’s retail release—

    The M3 is two generations old at this point…

    Booting a kernel is one thing; rendering a fluid graphical user interface is entirely another. The M3 achievement is particularly notable because it involves the GPU, historically the most obfuscated component of any System on Chip (SoC).

    Again, the Phoronix article (and the Xwitter post it links) completely contradicts this, saying instead that the rendering is done with “LLVMpipe CPU-based software acceleration”. The GPU is only involved insofar as is necessary to send data to the display.

    This article is misinformation, which is against this community’s rules.




  • The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:

    Corporations have been running algorithms for decades.

    Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.

    Markets weren’t inefficient because technology didn’t exist to make them efficient. Markets were asymmetrically efficient on purpose. One side had computational power. The other side had a browser and maybe some browser tabs open for comparison shopping.

    I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.

    Consider what it took to use business intelligence software in 2015. […] Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.

    You still need to structure your data, because the LLM has to be able to understand its structure. In fact, it is still easy enough to cause an LLM to misinterpret data that inconsistently-structured data is just asking for problems… not that LLMs are consistent anyway. The very existence of prompt engineering means the interface isn’t just conversation.

    The moment ChatGPT became public, people started using it to avoid work they hated. Not important work. Not meaningful work. The bureaucratic compliance tasks that filled their days without adding value to anything.

    Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.

    Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.

    People weren’t using ChatGPT to think. They were using it to stop pretending that performance reviews, status update emails, and quarterly reports required thought.

    Basically, people used it to produce the layer of communication that exists to satisfy organizational requirements rather than to advance any actual goal.

    Once again with the poor examples. If you can’t give a thoughtful performance review for the people who work below you, you’re just horrible at your job. Performance reviews aren’t just crunching some numbers and giving people a gold star. Maybe sometime in the future I could pipe all of the quick chats I’ve had with coworkers in the office into an LLM and tell it to consider them when generating a review, but that isn’t possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be basically summaries of existing data, so maybe they don’t require much thought, but they still require some. This is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can’t think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.

    This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.

    You want the cheapest milk within ten miles? You can build that.

    The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.

    You want to track price changes across every retailer in your area? You can do that now

    From searching “<country> price tracker”, I found Camel^3 which is famous for Amazon tracking and another country-specific one which has a ToS last updated in 2018. The author is describing things that could already be accomplished with a search engine.

    You want something to read every clause of your insurance policy and identify the loopholes?

    Lmao DO NOT use an LLM for this. They are not reliable enough for this.

    You want an agent that will spend forty hours fighting a medical billing error that you’d normally just pay because fighting it would cost more in time than the bill? You can have that.

    You know what? I take it all back, this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.

    The marginal cost of algorithmic labor has effectively collapsed. Using a GPT-5.2–class model, pricing is on the order of $0.25 per million input tokens and about $2.00 per million output tokens. A token is roughly three-quarters of a word, which means one million tokens equals about 750,000 words. Even assuming a blended input/output cost of roughly $1.50 per million tokens, you can process 750,000 words for about $1.50. War and Peace is approximately 587,000 words, meaning you can run an AI across one of the longest novels ever written for around a dollar. That’s not intelligence becoming cheaper. That’s the marginal cost of cognitive labor approaching zero.

    Never mind the irony of calling computers doing work “algorithmic labour”; this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.
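    To be fair, the quoted arithmetic is at least internally consistent — here’s a quick check using only the article’s own numbers (which I haven’t verified against any real price list). Note that the “around a dollar” figure only works at the blended rate, not the pure input rate:

```python
# Back-of-envelope check of the quoted pricing claim, using only the
# article's own numbers (not verified against any real price list).
input_rate = 0.25 / 1_000_000    # $ per input token (quoted)
output_rate = 2.00 / 1_000_000   # $ per output token (quoted)
blended_rate = 1.50 / 1_000_000  # the article's blended assumption
words_per_token = 0.75           # "a token is roughly 3/4 of a word"

war_and_peace_words = 587_000
tokens = war_and_peace_words / words_per_token  # ~783k tokens

cost_as_input = tokens * input_rate    # ~$0.20 if fed purely as input
cost_blended = tokens * blended_rate   # ~$1.17 -- the "around a dollar"
cost_as_output = tokens * output_rate  # ~$1.57 if it were all output
```

    So the math holds up under its own assumptions; it’s the framing (“cognitive labor approaching zero”) that doesn’t.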

    We are seeing the actual cost of LLM usage unfold and you’d have to be willingly ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.

    Institutions built their advantages on exhaustion tactics. They had more time, more money, and more stamina than you did. They could bury you in paperwork. They could drag out disputes. They could wait you out. That strategy assumed you had finite patience and finite resources. It assumed you’d eventually give up because you had other things to do.

    An AI assistant breaks that assumption.

    No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.

    Usage numbers tell the story clearly. ChatGPT reached 100 million monthly active users in two months. That made it the fastest-growing consumer application in history. TikTok took nine months to hit 100 million users. Instagram took two and a half years. The demand was obviously already there. People were apparently just waiting for something like this to exist.

    Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how all of the time spans for internet applications trend downwards over time. TikTok took 9 months, 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand, that’s an indicator of internet accessibility. (I’m ignoring Threads, because they automatically created 100M users from existing Instagram accounts in 5 days, which is a measure of their database migration capabilities and nothing else.)

    Venture capital funding for generative AI companies reached $25.2 billion in 2023 according to PitchBook data. That was up from $4.5 billion in 2022. Investment wasn’t going into making better algorithms. It was going into making those algorithms accessible.

    I’m sorry, what? LLMs are an algorithm. Author clearly does not know what they are talking about.

    DoNotPay, an AI-powered consumer advocacy service, claimed to help users fight more than 200,000 parking tickets before the company pivoted to other services. LegalZoom reported that AI-assisted document preparation reduced the time required to create basic legal documents by 60% in 2023.

    I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just replace algorithm with AI.

    The credential barrier still exists. You can’t get a prescription from ChatGPT. The legal liability still flows through licensed professionals. The system still requires human gatekeepers. The question is how long those requirements survive when the public realizes they’re paying $200 for a consultation that an AI handles better for pennies.

    Indeed, that will be an interesting thing to see once AI can actually handle it better and for cheaper. Though I wouldn’t count on it anytime soon. Don’t forget that the AI at that stage will still have to compensate the human doctors who wrote the data it was trained on.

    Oh, I just about hit the character limit. I guess I’ll stop there.
    Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.



  • From my, admittedly limited, interaction with mathematicians in my life and a bit of extrapolation:

    1. Academia: teach advanced mathematics and do research in mathematics for a university. There are still lots of unsolved problems in math, plus plenty of overlap with computer science, which has lots of research possibilities of its own
    2. Public sector: governments of all levels need at least statisticians, if not more specific mathematics skills depending on what they’re trying to do (e.g. research, engineering, economics, etc.)
    3. Private sector: lots of engineering companies employ a few mathematicians or at least physicists who are really good at math to make sure their next bridge/plane/ocean-boiler will actually work

    There’s a lot of overlap between all three but I roughly split them up based on where I’d expect the majority of jobs like that would be (e.g. I’m sure NASA employs a good deal of mathematicians, but so does Lockheed Martin and friends). Also a lot of people get a degree in mathematics and then specialize further with a masters and/or doctorate in computer science or physics, since both of those can be quite math-heavy and are better-funded fields.