• 1 Post
  • 32 Comments
Joined 3 months ago
Cake day: March 31st, 2025

  • Those are some good nuances that definitely require a nuanced response and forced me to refine my thinking, thank you! I’m actually not claiming that the brain is the sole boundary of the real me; rather, it is the majority of me, and my body is a contributor. The real me does change as my body changes, just in less meaningful ways. Likewise, some changes in the brain change the real me more than others. However, regardless of what constitutes the real me (and believe me, the philosophical rabbit hole there is one I love to explore), in this case I’m really just talking about the straightforward, immediate implications of a brain implant on my privacy. An arm implant would also be quite bad in this regard, but a brain implant is clearly worse.

    There have already been systems that can display very rough, garbled images of what people are thinking of. I’m less worried about an implant that tells me what to do or controls me directly, and more worried about an implant that has a pretty accurate picture of my thoughts and reports it to authorities. It’s surely possible to build a system that can approximate positive or negative mood states, and in combination this is very dangerous. If the government can tell that I’m happy when I think about Luigi Mangione, then they can respond to that information however they want. Eventually, in the same way that I am conditioned by the panopticon to stop at a stop sign, even in the middle of a desolate desert where I can see for miles around that there are no cars, no police, no cameras - nothing that could possibly make a difference to my running the stop sign - the system will similarly condition automatic compliance in thoughts themselves. That is, compliance is brought about not by any actual exertion of power or force, but merely by the omnipresent possibility of its exertion.

    (For this we only need moderately complex brain implants, not sophisticated ones that actually control us physiologically.)


  • I am not depressed, but I will never get a brain implant for any reason. The brain is the final frontier of privacy; it is the one place I am free. If that is taken away, I am no longer truly autonomous; I am no longer truly myself.

    I understand this is how older generations feel about lots of things, like smartphones, which I am writing this from, and I understand how stupid it sounds to say “but this is different!”, but like… really. This is different. Whatever scale smartphones, drivers licenses, personalized ads, the internet, smart home speakers… whatever scale all these things lie on in terms of “panopticon-ness”, a brain implant is so exponentially further along that scale as to make all the others vanish to nothingness. You can’t top a brain implant. A brain implant is a fundamentally unspeakable horror which would inevitably be used to subjugate entire peoples in a way so systematically flawless as to be almost irreversible.

    This is how it starts. First it will be used for undeniable goods: curing depression, psychological ailments, anxiety, and so on. Next thing you know, it’ll be an optional way to pay your check at restaurants, file your taxes, read a recipe - convenience. Then it will be the main way to do those things, and then suddenly it will be the only way to do those things. And once you have no choice but to use a brain implant to function in society, you’ll have no choice but to accept “thought analytics” being reported to your government and corporations. No benefit is worth a brain implant. Don’t even think about it (but luckily, I can’t tell if you do).




  • It’s crazy how Lemmy is still small enough that one user can have such massive cultural reach to tens of thousands of people just by posting a lot. The Picard Maneuver must be one of the most prolific Lemmy posters, if not the most prolific. I don’t even feel comfortable responding to their comment in this chain, like I’m not worthy of an audience with the Content King.



  • I think there is a substantial difference, though. Meat processing is done in a measured, considered way for a benefit (meat) that cannot be obtained without killing the animal. It is done in isolated facilities, away from people who find the process disturbing. Just because people find something gross doesn’t mean it shouldn’t be done - we have sewage maintenance done out of the public eye too - but it does maybe mean it should be done where people don’t have to see it. The only benefit this man gets from killing the animal is some sort of “revenge”. But that is in principle completely contradictory to meat processing, where animals are seen as less capable of higher-order experiences and therefore more acceptable to kill. To seek revenge, you would need to attribute more higher-order experience to the seagull than we typically grant it. You have to see the seagull as selfish, stealing, criminal, rude, etc., even though in reality a more reasonable person understands that it’s just an animal looking for food. Meat processing is not done out of some emotional vendetta against the animals; rather, its cold detachment is exactly what makes it acceptable. Can you imagine if we killed the same number of chickens every day, not to eat them, but just because we hate them? That would be much more horrifying! Because it would mean we think chickens are having complex enough inner experiences to warrant hatred, yet still we kill them.

    Meat processing maybe isn’t great, but it’s still much better than this seagull killer. It isn’t impulsive, it isn’t disproportionate in response to the situation, and it acknowledges and conceals its own horrors, thereby paying respect to important social codes. The actions of this man, though, disregarded the well-being of children and others around him in an impulsive and disproportionate response - your average meat-eater is indeed better than that, I think. When I have a craving for some meat, I don’t drag a calf down to the nearest playground, cut it in half, spray blood over the children, and proceed to mock the calf’s weakness and inferiority as I beat it to tenderize it before consumption. I just want some food, dude. But what’s this guy’s beef? It’s not beef, and it’s not even seagull meat, but rather some frightening notion of swift and decisive revenge, which reveals that he is just waiting for any excuse to get away with brutalizing things around him.


  • Ugh. Yes. The gotchas are totally a pacifier.

    “Hmm, nice genocide. Unfortunately for you I found this Twitter post you made ten years ago which proves you are acting hypocritically. Another victory for my cause!”

    Like… you have achieved nothing by identifying and publicizing these hypocrisies. Only very logical people care about hypocrisy. Most people just want what they want, and will pick up any reason for it as long as it comforts them, then discard that reason and replace it with a mutually exclusive one later as needed. And as a general rule, the logical people already agree with you.




  • Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. That is exactly the mistake I am pointing out in the statement “AI doesn’t actually reason, it just follows patterns”: it is unscientific if we don’t know whether “actually reasoning” consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of human reasoning. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:

    It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following is just as valid an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up whichever direction we lean?

    I think you and I are in agreement, we’re upholding the same principle but in different directions.


  • But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.
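
    The classic recursive solution those simple programs use fits in a few lines - a minimal sketch, written in Python purely for illustration:

```python
def hanoi(n, src="A", dst="C", spare="B"):
    """Return the list of moves that transfers n disks from src to dst."""
    if n == 0:
        return []
    moves = hanoi(n - 1, src, spare, dst)   # clear the n-1 smaller disks
    moves.append((src, dst))                # move the largest disk
    moves += hanoi(n - 1, spare, dst, src)  # restack the smaller disks
    return moves

print(len(hanoi(3)))  # 7 moves - the optimal 2**n - 1
```

    Seven moves for three disks, and not an emotion in sight.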

    As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous, as having free will, desires of its own choosing, etc.




  • I do think the real world has some differences that make it more difficult. Mostly that whoever is coordinating the larger groups is very likely to have access to more power and resources, and is therefore corruptible. That is one of the systems that brings about the Pareto-distribution sort of imbalance among people. Some inequality in terms of power is not destructive, but too much is almost guaranteed to end badly. Online, though, the power and resources that accrue are ultimately just less likely to reach the point of exerting full control over the smaller layers of the community. Sure, someone could start acting despotic within their own “fiefdom”, as another commenter aptly put it, like has sometimes happened with open source repositories or forums, but it’s hard for someone’s website to get so popular that they’re somehow able to directly force changes upon your website (not impossible, I know).


  • Yes, I like it smaller! Ideally you have a sort of fractal structure of a bunch of smaller, tighter communities, which are also bound up in larger but looser communities. Then you can get the benefits of broad exposure and resource sharing from large communities, as well as the benefits of meaningful individual engagement and respectful kinship from smaller communities. I think that personal sites along with forums and the rest of the Internet really can do a great job of bringing this about.

    As with many things, the responsibility ultimately lies on the individual to protect themselves and resist falling into bad patterns. First and foremost, maintaining your small community takes effort, and it’s much easier to just be a passive part of a very large community that subsists on infrequent, uninvested involvement from many people. It’s even easier to be part of a “community as a service” like Facebook, Instagram, or TikTok, where all the incentives behind community-building responsibilities have been supplanted by real income or fame. But of course then the people making posts, suggesting ideas, steering trends, managing communities, etc. are all in it for reasons that are not necessarily aligned with the well-being of community members. Hence the platform becomes a facade of a healthy community.

    Really good community upkeep seems to need to be done out of love for the community, with any income collected going to support that, rather than the other way around. But love for a community is often not sufficient fuel to power someone to serve huge groups out of the goodness of their heart, when they don’t even know 99% of the members. Not to mention that even if someone really is that altruistic and empathetic, the time and resources become unfeasible. So ultimately, a fractal or interleaved model seems to be the only one that could work.

    Don’t get me wrong. Large communities are awesome in their own ways and have their own benefits. They have more challenges, though. Ultimately the best way to build a good large community is by building a good small community.


  • Everyone lamenting this needs to check out neocities, or even get into publishing your own website. Even if it’s on a “big evil” service like GoDaddy or AWS, whatever - as long as it’s easy for you. Or learn to self-host a site. The internet infrastructure itself is the same, but now we have faster speeds, which means your personal site can be bigger and less optimized (easier for novices and amateurs to create). People still run webrings, people still have affiliate buttons, there are other ways to find things than search engines, and there are other search engines besides the big ones anyway.
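
    If “self-host a site” sounds daunting: for a small static site, Python’s built-in server is enough to experiment with locally. A sketch, where `make_site_server` is just an illustrative name:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_site_server(port=8000):
    # Serves the files in the current directory (index.html, css, images).
    return HTTPServer(("", port), SimpleHTTPRequestHandler)

# make_site_server().serve_forever()  # then browse to http://localhost:8000
```

    Real hosting puts a proper web server in front, but this is plenty for tinkering.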

    There are active communities out there that are keeping a lot of the old Internet alive, while also pushing it forward in new ways. A lot of neocities sites are very progressive. If you have an itch for discussion, then publish pages on your website in response to other people’s writings, link them, sign their guestbook.

    Email still exists. I have a personal protonmail that I use only for actually writing back and forth to people; I don’t sign up for services with it aside from fediverse ones. People do still run phpBB-style forums, too. You’ll find some if you poke around the small web enough.

    A lot of these things are not lost or dead. They just aren’t the default Internet experience, they’re hard to find by accident. But they are out there! And it’s very inspiring and comforting.