Bio field too short. Ask me about me, my beliefs, etc. if you want to know, or just look at my post history.

  • 0 Posts
  • 153 Comments
Joined 3 years ago
Cake day: August 3rd, 2023


  • I’m probably in the minority on this particular post. But this would NOT kill me.

    Ew, mushrooms. I’d pick them all off and have a pile of poison on my plate’s edge.

    Assuming nothing leached out.

    I don’t fully understand my hatred, but mushrooms do not belong in food. My only exception so far, and I’ll try most things once, is wood ear mushrooms, which give me seaweed vibes and are just sort of chewy/crunchy.





  • Does that make this better? A translated French search query would be ‘joining video call isn’t working’ and that will return results for every conference tool known to man.

    Call it something like FVC (la France VisioConférence), or some French play on the way that sounds, which would be a uniquely searchable term in this domain.

    This is not a hill I’m dying on, but it’s terminally short-sighted and a bad user experience to name your product the same thing as a microslop trademark. They are already the worst for this, with their multiple active variants of Office 365 tools like Outlook and their Xbox naming nonsense.

    Oh, I have a great idea for a new car company. Let’s call it ‘Car’! Then people can have a Car Car, or maybe even a Car Car 2026… oh, or a Car Truck when we branch out. (Future google search: replace car truck 2028 oil filter.)



  • korazail@lemmy.myserv.one to Comic Strips@lemmy.world · Vanilla Ice · 7 days ago

    The best ‘convert’ is the one that got there on their own. They’re already primed to believe us the next time we warn them.

    We do harm by mocking the leopards-ate-my-face crowd when they finally catch on. Even if it is cathartic.

    There’s room to tell them, “Oh, that thing we warned you about actually happened? Maybe we weren’t crazy,” but we should then welcome them in and guide them instead of offering a rude “I told ya so” and no empathy.




  • I’m 90% on-board with disliking these, but I can see uses for ‘Augmented Reality’ glasses. I just wish they worked the way they do in Sci-fi and video games.

    Lots of interactions we have on our phones could be done hands-free on a HUD:

    • automatic translation of text or voice when traveling
    • navigation/directions and similar guidance, like automatic subway/train maps
    • instant access to biometric data trends like heart rate, glucose levels and more

    I’ve also been part of a pilot to get a HUD to provide AR data to a manufacturing operator, showing things like line speed, temperature and other kinds of data they would otherwise have to go to a computer for. This was around the google glass era, though, and the devices were too pricey to justify and the tech wasn’t there yet.

    I do think these devices need to be more obvious. We called them glassholes when google was starting this wearable computing trend and people were using them inappropriately, and we’ve seen how any internet-connected camera, like Ring or Flock, can be abused.

    The concept of the personal HUD is useful, but it still needs workshopping to make it socially safe. Also, the ones like the Meta/Ray-Ban glasses are just pervert tools: no AR, just a camera, which has no value other than creeping.


  • I’m certainly not a microslop supporter, but…

    They designed a system that recommended that the average user use full disk encryption as part of device setup, and then provided a way that Grandma could easily recover her family photos when she set it up with their cloud.

    This was built by an engineer trying to prevent a foreseeable issue. The intent was not malicious. The intent was to get more people secure by default, since a random hacker can’t compel MS to hand over keys, while still allowing people with low tech literacy to not get fucked.

    It’s been a while since I installed a new Windows OS, but I’m pretty sure it prompts you to allow uploading your BitLocker key. It probably defaults to yes, but I doubt you can’t say no, or reset the key after onboarding if you want the privacy; then it’s on you to record your key. You do have to have some technical understanding of the process, though, which is true of just about everything.
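
    If you do want to rotate the recovery key after setup, the real tool for it is manage-bde. Here’s a rough sketch from an elevated prompt; verify the flags against manage-bde -? on your build, and remember to delete the old key from your Microsoft account afterward:

    ```python
    # Rough sketch (Windows, run elevated) of rotating a BitLocker recovery
    # key after setup using the built-in manage-bde tool. Order matters:
    # delete the old recovery password(s) first, then add a fresh one.
    # The TPM protector keeps the drive bootable throughout.
    import subprocess

    DRIVE = "C:"

    # List current protectors so you can see what's there.
    subprocess.run(["manage-bde", "-protectors", "-get", DRIVE], check=True)

    # Remove the old recovery password(s) -- including the one escrowed
    # to the cloud -- then generate a new one.
    subprocess.run(
        ["manage-bde", "-protectors", "-delete", DRIVE, "-type", "RecoveryPassword"],
        check=True,
    )
    subprocess.run(
        ["manage-bde", "-protectors", "-add", DRIVE, "-RecoveryPassword"],
        check=True,
    )
    # Now back up the printed key somewhere offline, and remove the stale
    # copy at https://account.microsoft.com/devices/recoverykey
    ```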

    That all said, if a company has your data, it can be demanded by the government. This is a cautionary tale about keeping your secrets secret. Don’t put them in GitHub, don’t put them in Chrome, don’t put them online anywhere because the Internet never forgets.



  • The big difference is that smart phones and centralized internet are somewhat useful. Smartphones at least. Centralized internet… meh, but maybe a dependency.

    AI is useful in only very niche and intentional cases. A ‘generic’ LLM is pretty bad at almost everything.

    If ‘AI’ had been sold more like “give us a year of data samples from your production line and we can use ML to optimize time and temperature based on current weather patterns…” (a real-world use case I was working on in 2019; see the sketch below), then it would have really made the world better. Instead, I have crappy clippy constantly reading my email and suggesting words I wasn’t going to type*.

    • I don’t understand how corps accept the idea that their internal emails are no longer internal, since everything is sent to chatgpt/copilot/gemini/etc. as it’s created. Shouldn’t Legal have thrown a tantrum over this?!
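
    For what that 2019-style production-line case might look like, here’s a minimal sketch. The CSV and every column name are hypothetical; the point is that a small, boring regression model, not a chatbot, is the right tool here:

    ```python
    # Minimal sketch of "a year of production data -> optimized setpoints".
    # line_history.csv and all column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("line_history.csv")  # one row per production run

    features = ["ambient_temp_c", "humidity_pct", "oven_setpoint_c", "belt_speed_mps"]
    target = "defect_rate"

    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df[target], test_size=0.2, random_state=42
    )

    model = GradientBoostingRegressor().fit(X_train, y_train)
    print(f"R^2 on held-out runs: {model.score(X_test, y_test):.2f}")

    # Given today's weather, scan candidate setpoints for the lowest
    # predicted defect rate.
    candidates = pd.DataFrame(
        [(21.0, 55.0, setpoint, speed)
         for setpoint in range(180, 221, 5)
         for speed in (0.8, 1.0, 1.2)],
        columns=features,
    )
    best = candidates.assign(pred=model.predict(candidates)).nsmallest(1, "pred")
    print(best)
    ```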




    Wrangling IDE cables at awkward angles so you couldn’t both see and touch the space at the same time. And the case edges were made of knives. And then, yeah, it wouldn’t boot, and you’d have to figure out that your master/slave jumpers were incorrect, as others have stated, then remove, tweak, and reseat the drives.

    Good times.



  • I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.

    The ‘accuracy’ aspect is my 2nd-greatest concern: an LLM agent that I ask to find a nearby Indian restaurant, and that hallucinates one, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (LLMs are notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, though, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and makes up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t foresee ever trusting an AI agent to do anything important for me.

    “Trust”/“privacy” is my greatest fear, though. There’s documentation from the major players that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use, but here’s OpenAI’s ToS page on how they will use your input to train their models unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/

    Note that this means when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate it as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net gain, though; new bestie.

    The real risk, though, is that your daily life is now collected, collated, harvested, and added to the model’s data set, all without any clear, explicit action on your part: using these tools requires accepting a ToS that most people will not really read or understand. Maaaaaany people will expose otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.

    To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to maintain my privacy. If I ask something like “where to get an abortion in texas,” I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone querying the prompt logs (using an AI, which might make shit up) to ask “who asked about anti-X topics” or “pro-Y.”


    My personal use of ai: I like the NLP paradigm for turning a verbose search query into other search queries that are more likely to find me results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get google to identify.

    There’s a use case here, but I can’t accept it as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get value without the privacy tradeoff.
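
    For the curious, the query-expansion trick is tiny. A minimal sketch, assuming a local Ollama server on its default port with an ~8B model already pulled; the model name and prompt here are my own placeholders, not a recommendation:

    ```python
    # Local query expansion against an Ollama server (localhost:11434 is
    # Ollama's default). Nothing in this loop leaves your machine.
    import json
    import urllib.request

    VERBOSE_QUESTION = (
        "movie from the late 90s where a kid finds a board game "
        "and animals come out of it"
    )

    payload = {
        "model": "llama3.1:8b",  # assumption: whatever ~8B model you pulled
        "prompt": (
            "Rewrite the question below as 3 short, literal web search "
            "queries, one per line, no commentary.\n\n"
            "Question: " + VERBOSE_QUESTION
        ),
        "stream": False,
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```

    Feed whichever of the three queries looks sanest into your search engine of choice.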

    Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need oversight at all times. They may work 90% of the time, but eventually they will send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.
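
    A minimal sketch of what “oversight at all times” could look like: the agent only ever proposes; a human confirms before anything irreversible runs. The send_email stand-in and the wiring are hypothetical:

    ```python
    # Human-in-the-loop guard: an agent may PROPOSE an action, but nothing
    # irreversible executes without a human typing "y".
    def confirm_and_run(description: str, action, *args):
        """Show the human exactly what will happen; run only on explicit yes."""
        print(f"Agent proposes: {description}")
        if input("Execute? [y/N] ").strip().lower() != "y":
            print("Skipped.")
            return None
        return action(*args)

    def send_email(to: str, body: str):  # hypothetical stand-in
        print(f"(sent to {to}: {body!r})")

    # The LLM drafted this; a human still reads it before it leaves the machine.
    confirm_and_run(
        "send email to boss@example.com with the drafted reply",
        send_email, "boss@example.com", "Draft reply text...",
    )
    ```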


    I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:

    “Why did you call me an asshole?”

    “I didn’t; the AI did, and I didn’t read the response as carefully as I should have.”

    “Oh, OK.”


    Edit: adding to my use case: I’ve heard LLMs described as a blurry JPEG of the internet, and to me this is their true value.

    We don’t need an 800B model; we need an easy 8B model that anyone can run and that helps turn “I have a question” into a pile of relevant, actual searches.