• 0 Posts
  • 154 Comments
Joined 3 years ago
Cake day: August 2nd, 2023

  • I agree that Matrix is a slow and buggy hot mess, but its issues mainly lie with scaling. As long as your instance is small, it works well enough. IMO this is architectural and will never be fixed in Synapse.

    As for there being no alternatives to Discord: I think the problem is that people have come to expect a level of QoS from hosted services that is expensive for hobbyists to maintain (CDN, load balancing, NAT traversal, DDoS protection, etc.). I think this is fundamental to how we’re abusing IP when it’s way past its prime, kept on life support by middleboxes. If we want to reclaim this space, the best way forward would be something like NDN, but the transition cost would be so astronomical that nobody wants to pay it.


  • StarDreamer@lemmy.blahaj.zone to Technology@lemmy.world · *Permanently Deleted* · 12 days ago

    Our minds like to process entities/companies like Google as human beings, which allows us to assign emotions to these things. But the truth is, they are nothing but a glorified Chinese room experiment.

    People made the largest browser engine and operating system, not Google. Without people, the company is nothing. A company like Google is nothing but a set of self-operating rules.

    I love/loathe Google just as much as I love/loathe my weekly /tmp cleaning cron job. Even if it accidentally nukes my files, it’s just doing as it’s designed to do.

    Design a system to maximize shareholder value and it will do exactly that, without caring one bit about human ethics.


    1. They were ALREADY scanning your data using AI… and banning people for simply discussing certain topics.
    2. How do people even notice these things? I honestly couldn’t tell. (Edit: okay, I forgot about the domain)

    Anyways, I’m trying to get people in specific vulnerable communities to switch to Matrix. But the number of people refusing to do so out of convenience (and even refusing to set up MFA or use different passwords for their online accounts, including Discord) is staggering.


  • University “educator” here. There has been a dramatic increase in students who lack critical thinking skills, especially after COVID. I’m not referring to people who just bomb tests, but to a complete lack of motivation/ability to do basic things without someone handholding them through the entire process.

    We’re seeing students flatly refuse to solve basic equations (X = Y + Z) in advanced upper-div computer science courses, or struggle to set up a basic C/C++ template with a very detailed README guiding them through the whole process. We’re also seeing students zone out and blue-screen while being guided through a homework question. (“Here’s the equation; where are the numbers in this question description; what happens if you change XYZ?” All of this in bite-sized chunks.) A lot of people only respond to traditional lecturing in a big hall and cannot/will not respond to any questions or reading materials. In these cases, I believe their standardized test scores reflect their knowledge level accurately.

    This isn’t to say there aren’t good students. If you look at the overall distribution, there’s still a decent number of good/smart students. It’s just that test results no longer show a bell curve these days. Usually it’s a bell curve overlapped with a large tail that can make up 20-30% of a class.





  • Ethical concerns aside, there is a difference between using AI to avoid hiring artists/developers and using AI because someone can’t realize their vision without having all the prerequisite skills.

    On one hand, you have companies using AI when they can absolutely hire a human to do something; on the other, there is someone who couldn’t have published anything without the assistance of such a tool.

    People have different passions, and not everyone can be good at art, programming, etc. to create something amazing. The problem is when someone uses the tool as a crutch, or uses it to replace human expression of intent. Then it truly becomes a soulless, worthless piece of crap.

    The best example is the scanlation scene, where people translate manga. It’s considered fine to use AI to remove the original text, while NOBODY is fine with an AI translation. Why? Because redrawing line art is an activity that doesn’t require human expression (it’s about preserving the original artist’s expression, not changing anything), while localizing text requires a human to interpret and re-express intent in a different cultural setting.



  • As someone who is in a relevant field (higher ed), the teachers are doing what they can.

    This past year I’ve had college students ask what time it is during an exam because they can’t read the analog clock projected on the wall. If you can make it to 20 years old without realizing you’re missing a critical skill and learning it yourself, that’s also on you.

    We’re also seeing a lack of critical thinking skills and of the ability to retain information. People don’t remember things taught 1-2 semesters ago. It’s not that they need “a refresher”; they completely forget core concepts (such as what CPU caches are, in an advanced architecture course). Then there are tons of people who can recite every definition on an exam, but cannot take one step further to reach a conclusion on a problem. (git revert undoes a committed change; so if I run the command right after committing a test file, the file is gone and no test gets executed.)
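    For context, the conclusion in that parenthetical is exactly the one-step inference students fail to make. A minimal sketch in a throwaway repo (the file name is hypothetical) showing the chain of effects:

    ```shell
    # Throwaway repo to demonstrate the git revert reasoning chain.
    set -e
    dir=$(mktemp -d) && cd "$dir"
    git init -q
    git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "base"

    echo 'assert True' > test_example.py          # hypothetical test file
    git add test_example.py
    git -c user.name=t -c user.email=t@example.com commit -q -m "add test"

    # Reverting the commit that added the file creates an inverse commit,
    # and the file disappears from the working tree too.
    git -c user.name=t -c user.email=t@example.com revert -q --no-edit HEAD
    test ! -f test_example.py && echo "test file gone, nothing to execute"
    ```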

    There is something wrong with students today. And I’m saying that as someone who finished my own undergrad during COVID. But the institutions are adapting by teaching things with less depth, which then dumbs down further education, because later courses now have to re-cover everything from scratch…






  • I may be biased (PhD student here), but I don’t fault them for being that way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) differs from person to person. Quite frankly, it’s not part of their training, it has never been emphasized as part of their training, and it is subjective, shaped by cultural experience.

    What counts as unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google’s use, then it’s perfectly ethical. That being said, this does not prevent someone else from adding data collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that. But if that is done, there’s nothing else to blame the designers for. The moral responsibility lies with the one who pulled the trigger.

    Should the original designer have anticipated this issue and therefore never taken the first step? Maybe. But that depends on a lot of circumstances we don’t know, so it’s hard to predict anything meaningful.

    As for the “more harm than good” analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If this reasoning holds, an extreme example would justify harm to any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better quantitative rule is to check whether harm is introduced to ANY group of people; if it is, the whole is considered unethical.
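    The difference between the two decision rules can be made concrete. A toy sketch (the utility numbers are made up purely for illustration): the utilitarian sum accepts any net-positive outcome, while the veto rule rejects it the moment any single group comes out harmed.

    ```python
    # Made-up per-group utilities for illustration only.
    group_utility = {"majority": 10, "minority": -3}

    # Rule 1: utilitarian sum -- accept if total benefit outweighs total harm.
    utilitarian_ok = sum(group_utility.values()) > 0

    # Rule 2: veto rule -- accept only if NO group is harmed at all.
    veto_ok = all(u >= 0 for u in group_utility.values())

    print(utilitarian_ok, veto_ok)  # True False
    ```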


  • This is common for companies that like to hire PhDs.

    PhDs like to work on interesting and challenging projects.

    With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g. Intel Optane and transactional memory).

    Designing a realtime scam analysis tool with resource constraints is interesting enough to be greenlit but makes no money.

    Once released, they’ll move on to the next big challenge, and when nobody is there to maintain their work, it will be silently dropped by Google.

    I’m willing to bet more than 70% of the Google graveyard comes from projects like these.




  • I keep hearing good things; however, I have not yet seen any meaningful results for the stuff I would use such a tool for.

    I’ve been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets, you only get approximately 200 ns of processing time per packet (this assumes no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this is also running on ARM, where an atomic fetch-and-add in GCC compiles to a function doing a load-linked/store-conditional loop, taking approximately 8x the regular memory access time for a write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that explain in detail how these things work, so there are no resources for them to be trained on.
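    The per-packet time budget follows directly from the line rate. A quick sanity-check sketch (assuming standard Ethernet framing: a 1518-byte frame for a 1500-byte MTU, plus 20 bytes of preamble, SFD, and inter-frame gap; the exact figure depends on the link speed):

    ```python
    def per_packet_budget_ns(link_gbps: float, frame_bytes: int = 1518,
                             overhead_bytes: int = 20) -> float:
        """Time between back-to-back MTU-sized frames at a given line rate.

        frame_bytes: Ethernet frame incl. headers and FCS for a 1500-byte MTU.
        overhead_bytes: preamble (7) + SFD (1) + inter-frame gap (12).
        """
        bits_on_wire = (frame_bytes + overhead_bytes) * 8
        return bits_on_wire / link_gbps  # bits / (Gbit/s) comes out in ns

    print(round(per_packet_budget_ns(100), 1))  # ~123.0 ns at 100 Gbit/s
    ```

    So the roughly-200 ns figure corresponds to a link somewhere in the tens-to-hundreds of Gbit/s range; batching is what relaxes this budget in practice.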

    You’ll find a similar problem if you prompt them to generate good RDMA code. At best you’ll get something that barely works, and almost always the code cannot efficiently exploit the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a graduate CS student might imagine RDMA works, but is usually completely unusable, either requiring extra PCIe round-trips or suffering severe thrashing against main memory.

    My guess is that these tools are ridiculously good at stuff they can find examples of online. But for stuff that has no public examples, they are woefully underprepared, and you still need a programmer to do the work manually, line by line.


  • As much as I hate the concept, it works. However:

    1. It only works for generalized programming. (E.g. write a Python script that parses CSV files.) For any specialized field this would NOT work (e.g. write a DPDK program that identifies RoCEv2 packets and rewrites the IP address).

    2. It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem now, since even programmers fresh out of college generally know what they are doing.
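    To illustrate point 2 with the csv-vs-split example (the data is made up): naive splitting breaks on quoted commas, which is exactly the kind of technique choice the supervising human has to know enough to prompt for.

    ```python
    import csv
    import io

    data = 'name,city\n"Doe, Jane",Berlin\n'  # made-up CSV with a quoted comma

    # Naive approach: split every line on commas, ignoring quoting rules.
    naive = [line.split(",") for line in data.strip().splitlines()[1:]]

    # Proper approach: let the csv module handle quoting.
    proper = list(csv.reader(io.StringIO(data)))[1:]

    print(naive[0])   # ['"Doe', ' Jane"', 'Berlin']  (wrong: 3 fields)
    print(proper[0])  # ['Doe, Jane', 'Berlin']       (correct: 2 fields)
    ```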

    If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future, when the skilled talent pool runs dry and nobody left can tell correctly from incorrectly written code.