

Agreed, it looks fairly easy to disassemble and clean. I would attempt that before replacing it. You may find a YouTube vid or two showing how. PVC thread tape is your friend. Dirt in the valve will stop it from closing properly as pressure builds.


Agreed on your point. We need a way to identify those links so that our browser or app can automatically open them through our own instance.
I am thinking along the lines of a registered resource type, or maybe a central redirect page, hosted by each instance, that knows how to send you to your instance to view the post there.
I am sure it is a problem that can be solved. I would, however, not be in favour of some kind of central identity management. It is too easy a choke point and would take autonomy away from the instances.
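The "central redirect page" idea could look something like this minimal sketch: given a post's URL on a foreign instance and the reader's home instance, build a URL that asks the home instance to resolve and display the post locally. The `/resolve` endpoint name and URL layout here are assumptions for illustration, not an existing kbin or Lemmy API.

```python
# Hypothetical sketch: rewrite a foreign post URL so the reader's own
# instance resolves and displays it. Endpoint name is an assumption.
from urllib.parse import quote

def local_view_url(post_url: str, home_instance: str) -> str:
    """Build a URL on the reader's home instance that points at a remote post."""
    # Percent-encode the whole post URL so it survives as a query parameter.
    return f"https://{home_instance}/resolve?uri={quote(post_url, safe='')}"

print(local_view_url("https://kbin.social/m/fediverse/t/12345",
                     "my.instance.example"))
# → https://my.instance.example/resolve?uri=https%3A%2F%2Fkbin.social%2Fm%2Ffediverse%2Ft%2F12345
```

A browser extension or app could then intercept fediverse links and rewrite them this way before opening them.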
That should just work. You view the post on your own instance and reply there. That response trickles out to the other instances.
It may take a while to propagate though. The paradigm is close to that of the ancient NNTP newsgroups, where responses travel at the speed of the servers' synchronisation. It may be tricky for rapid-fire conversation, but it works well for comments on articles.


As a very long-time reader of The Register, I actually enjoy their headlines. They have always had a tabloid style to them, even before clickbait was a thing, and I have seldom been disappointed by the contents of anything I have clicked on. So agreed, a quality site.
Ars Technica and The Register are my two oldest daily reads.


The exciting thing about this space is that much of it is undefined. It is all about the protocols and the main features at the moment. The second-generation tools will be born out of what we discuss and think about now.
How do you make sure a user is not trapped in his special-interest bubble and still gets to see the content that has everyone excited? How will we make use of the underlying data, on both posts and users, to suggest and aggregate content?
I think there will eventually be more than one solution: different flavours of aggregators running on the same underlying data.
So much possibility. And we control it. If you don't like the way your Lemmy instance or kbin aggregates, choose another site or build your own. The data is there.


That is a good way to think about it. What is the need from the reader's perspective, and from the poster's?
One would certainly read a post with low upvotes from an author with high reputation if you are interested in the specific magazine. I wonder if the reputation should not be topic-bound rather than just general. That would be useful from the reader's perspective.
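Topic-bound reputation could be as simple as scoring an author per magazine instead of keeping one global number. A minimal sketch (the data model and names are invented for illustration, not anything kbin actually does):

```python
# Hypothetical sketch: per-magazine (topic-bound) author reputation.
from collections import defaultdict

class Reputation:
    def __init__(self):
        # (author, magazine) -> accumulated vote score
        self.scores = defaultdict(int)

    def record_vote(self, author: str, magazine: str, delta: int) -> None:
        """Apply an upvote (+1) or downvote (-1), scoped to one magazine."""
        self.scores[(author, magazine)] += delta

    def topic_reputation(self, author: str, magazine: str) -> int:
        """Reputation in one magazine; unknown pairs default to 0."""
        return self.scores[(author, magazine)]

rep = Reputation()
rep.record_vote("alice", "astronomy", +5)
rep.record_vote("alice", "gaming", -2)
print(rep.topic_reputation("alice", "astronomy"))  # → 5
print(rep.topic_reputation("alice", "gaming"))     # → -2
```

A reader browsing the astronomy magazine would then see alice's strong reputation there, regardless of how her posts fare elsewhere.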


It is a thing to note about starting an IT business: office space is an overhead you cannot afford. I was involved as a founder of a small custom-dev and consulting company in the mid-2000s. We were about ten people and very distributed. Every Friday we would meet at a coffee shop with bottomless coffee 😉, do our version of a stand-up, and then get on with it for the week. Even then we managed with email, Google Chat, and cell phones. The tooling is so much better today than it was then, so there is no reason to waste money on renting fancy spaces.
Some books just beg to be read again and again. I am on my third copy of The Lord of the Rings and my second of Dune. The advent of good reading apps, like FBReader on Android, saved my Iain M. Banks collection from a similar fate. That said, my copy of The Algebraist is starting to show its age.
So yes, rereading a good book can be fun.
Edit: Wrote this on mobile. The mobile UI is not always clear about which magazine a post came from, so I missed the Linux in there. Things are not as dire on Linux as on Windows for AMD, so my assessment may be a bit pessimistic. With AMD's focus on the data centre for machine learning, the Linux driver stack seems fairly well supported.
I spent the last few days getting Stable Diffusion and PyTorch working on my Radeon 6800 XT in Windows. The DirectML build of Stable Diffusion runs at about 1/4 of the speed of raw ROCm when I compare it to the SHARK tooling, which supports ROCm via Docker on Windows.
Expect the tooling to be clunky, and expect to compile everything yourself on Linux. Prebuilt stuff will all be for Nvidia.
AMD is pushing hard into the AI space, but aiming at data-centre users. They are rumoured to be building ROCm into their Windows drivers, but when that will ship is anyone's guess.
So right now, if you need to hit the ground running for your academic work, I would recommend Nvidia, as much as it pains me as a long-time AMD user.
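For what it's worth, on Linux the least painful route to a ROCm-enabled PyTorch is the prebuilt Docker image rather than compiling. A command sketch along the lines of the ROCm docs (check the flags and image tag against your driver version before relying on it):

```shell
# Sketch: run the prebuilt ROCm PyTorch container on Linux.
# /dev/kfd and /dev/dri expose the AMD GPU to the container; membership in
# the video group grants access. Assumes the amdgpu driver and Docker are
# already installed on the host.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/pytorch:latest \
  python3 -c "import torch; print(torch.cuda.is_available())"
```

ROCm builds of PyTorch reuse the `torch.cuda` API, so the same availability check works on both vendors' GPUs.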
I was looking for one in my old PC junk boxes, to show my 12-year-old what they looked like. Not a single floppy survived.
I recently rediscovered the genre while looking for remixes of classic Commodore 64 game tunes. Surprisingly effective as background music while working.


It is like watching a slow train wreck. You know you should just look away, but you just cannot.
I have been using kbin as my primary source of social media since the Reddit enshittification got serious. Dude, you are doing excellent work. I am a dev myself and this is just amazing. Thank you!
The vision is sometimes more important than arbitrary deadlines. Your life, your call, but I would rather be patient and keep the man with the vision in charge than have kbin fragment. Just my thoughts.
Thanks for all the hard work.