wakeupcall 34 minutes ago

Great analysis, but I'm even more surprised to discover that producthunt is a "real" website at all.

I blocked PH with ublacklist a long time ago for looking like SEO promotion/garbage and looking too much like those "VS/comparison/best 5 apps" websites with next to zero content. These pop out faster than what I can filter by hand.

After checking it out again and knowing it is not purely-generated content, I _STILL_ don't see the value proposition if I stumbled on such a result.

imiric 18 hours ago

I do wonder if ProductHunt uses any CAPTCHA solution.

In spite of the flack that CAPTCHAs usually get, I still think they have a lot of value in fighting the majority of these spam attacks.

The common criticisms are:

- They impact usability, accessibility and privacy. Users hate them, etc.

These are all issues that can be improved. In the last few years there have been several CAPTCHAs that work without user input at all, and safeguard user privacy.

- They're not good enough, sophisticated (AI) bots can easily bypass them, etc.

Sure, but even traditional techniques are useful at stopping low-effort bots. Sophisticated ones can be fought with more advanced techniques, including ML. There are products on the market that do this as well.

- They're ineffective against dedicated attackers using mechanical turks, etc.

Well, sure, but these are entirely different attack methods. CAPTCHAs are meant to detect bots, and by definition, won't be effective against attackers who decide to use actual humans. Websites need different mechanisms to protect against that, but those are also edge cases and not the main cause of the spam we see today.

  • a2128 a few seconds ago

    There is a whole industry of CAPTCHA solving services that mostly use humans in places where labor is cheap. Prices per reCAPTCHA vary somewhere between $0.001 to $0.002 on one of the popular ones. It doesn't require much sophistication to use it. For around $50/year you can spam a website with 100 comments per day, assuming it requires a CAPTCHA to be solved per comment. This pricetag may leave the average script kiddie out of the game, but if your spam is earning you money somehow then this becomes easily profitable. I don't believe these services are "edge cases"

  • Terr_ 13 hours ago

    Lately I've been pondering how one might create a "probably a human"/skin-in-the-game system. For example, imagine visiting an "attestor" site where you can make a one-time donation of $5 to a charity of your choice, and in exchange it gives you some proof-you-spent-money tokens. Those tokens can be spent (burned) by some collaborating site (e.g. HN) to mark your account there as likely a human, or at least a bot whose owner will feel pain if it is banned.

    This would be far more privacy-preserving that dozens of national-ID lookup systems, and despite the appearance of "money for speech" it could actually be _cheaper_ than whatever mix of time and bus-fare and paperwork in a "free" system.

    ____________

    I imagine the big problems would be things like:

    * How to handle fraudulent payments, e.g. someone buying tokens with a stolen credit card. Easiest fix would be some long waiting-period before the token becomes usable.

    * How to protect against a fraudulent attestor site that just takes your money, or one whose tokens are value-less.

    * How to protect against a fraudulent destination site that secretly harvests your proof-token for its own use, as opposed to testing/burning it properly. Possible social fix: Put in a fake token, if the site "accepts" then you know it's misbehaving.

    * Handling decentralization, where multiple donation sites may be issuing their own tokens and multiple account-sites that may only want to support/trust a subset of those tokens.

    • brian_cunnie 10 hours ago

      > you can make a one-time donation of $5 to a charity of your choice ...

      The Alcoholics Anonymous San Francisco website had to implement CAPTCHAs on their website because scammers were making one-time donations to make sure their stolen credit cards were still valid. Every morning we had to invalidate a dozen obviously-fake donations.

      • LorenPechtel 9 hours ago

        Yup. Charitable donations are a way to spend money without it pointing to you and thus a common test for a stolen card.

      • Terr_ 9 hours ago

        Definitely an issue. I don't really like the idea of long-term Patreon-eseque relationship between the individual user and the attestor/issuer site, but it could be done. The charitable giving is more of a means-to-and-end than a goal, functioning as a kind of "observed spending" which is harder to fake than, say, buying something from yourself on ebay.

        If tokens had to mature for X days before being used that could deter laundering pretty handily, but stopping "tests" of cards would require hiding payment errors from the user for a certain period... which would not be a great experience.

    • squigz 5 hours ago

      As a poor disabled citizen who also cares about privacy and freedom, I haven't heard a single idea for attestation that doesn't scare the shit out of me. But then, I'm a poor, disabled citizen, so my opinion doesn't hold much weight.

    • esperent 6 hours ago

      It seems to me that ideas like this are unworkable due to income inequality.

      $5 isn't much for a wealthy westerner. It's a reasonable amount for an unemployed westerner. It's 12% of their weekly budget for someone earning median wage ($160/month) in Vietnam. But if you put in place regional pricing, it'll be cheap enough that spammers will just operate out of low income countries and buy thousands of cheap accounts.

      • Terr_ 6 hours ago

        > It seems to me that ideas like this are unworkable due to income inequality.

        There's no reason you can't have an attestation entity that's based on volunteer hours, provided you can convince sites-in-general that your proof-units are good.

        The core theme isn't about cash, but that:that:

        1. There are kinds of activity someone can do which demonstrates some kind of distinct actual expenditure of time or effort (not self-dealing.)

        2. A trusted agent could attest to that activity.

        3. Requiring (proof of) activity gives you an decent way to ward off the majority of bots/spam in a relatively simple way that doesn't become a complex privacy nightmare.

        It's a similar outcome to sending CPU-bound challenges to a client, except without the deliberate-waste and without a strong bias towards people who can afford their own fast computer.

        • nkrisc 2 hours ago

          Is still completely asymmetric in that the poorer you are, the harder you have to work to now simply access the same resources online as everyone else.

          And the poor get poorer.

          • tim333 an hour ago

            Well not in places like Vietnam (7% gdp growth/year) or a lot of the less developed world.

        • Woeps 2 hours ago

          The issue is that it another system that puts the "blame" and/or work people instead of dealing with the root cause. So in that case nothing changes.

          Because I wonder how are people going to do volunteering hours, get it recognized trough red tape/bureaucracy if they're already struggling to survive.

    • mandibles 12 hours ago

      Have you checked out the L402[0] protocol?

      It's basically using the HTTP 402: Payment Required status code and serving up a Lightning Network payment invoice.

      Edit to add: it basically solves all of the caveat issues you identified.

      [0]: https://l402.org/

    • doctorpangloss 13 hours ago

      > Lately I've been pondering how one might create a "probably a human"/skin-in-the-game system.

      This has the same energy as the "we need benchmarks for LLMs" startups. Like sure it's obvious and you can imagine really complex cathedrals about it. But nobody wants that. They "just" want Apple and Google to provide access to the same APIs their apps and backends use, associating authentic phone activity with user accounts. You already get most of the way there by supporting iCloud login, which should illuminate to you what you are really asking for is to play outside of Apple's ecosystem, a totally different ask.

    • tim333 12 hours ago

      There is the much slagged off but maybe effective Worldcoin.

      • LoganDark 6 hours ago

        Nothing like this or Worldcoin ever will be useful in any capacity for qualifying non-fraudsters, because fraudsters will have an infinite supply from people they tricked while non-fraudsters will only have what they've personally been given. So it'll basically do the opposite of what you want.

        • tim333 an hour ago

          Worldcoin IDs are one per person and I've not heard of people tricked to give them away. Some have been bought for cash but there's a limit to how many of those will be available. In practice they are no good for verifying humans on blogs and the like though because only about 0.0003% of humans have one. But maybe something a bit like that that's easier to get?

    • akoboldfrying 10 hours ago

      >Possible social fix: Put in a fake token, if the site "accepts" then you know it's misbehaving.

      IIUC the tokens would need to be cheaply verifiable by anyone as authentically issued, so a fake token would never be accepted (or if it somehow was, it would only tell you that the acceptor is fantastically lazy/incompetent).

      I think that that verifiability, plus a guarantee that tokens will not be spent twice, plus visibility of all transactions, suffice: Then anyone can check the public ledger x minutes after they spent their and verify that the acceptor sent it straight to the burn address after receiving it. IOW, blockchain suffices. OTOH, it would be nice not to have to need the public ledger.

      • Terr_ 10 hours ago

        I think blockchain is (as usual) the wrong tool for the job here, since it would dramatically increase code/bugs/complexity/onboarding-cost while also introducing new privacy risks. After all, we're already trusting that a given attestor/issuer isn't just handing out tokens willy-nilly.

        By comparison, here's a simpler "single HTTP call" approach, where a site like HN makes a POST to the issuer's API, which would semantically be like: "Hey, here is a potential token T and a big random confirmation number C. If T is valid, burn it and record C as the cause. Otherwise change nothing. Finally tell me whether-or-not that same T was burned in the last 7 days along with the same C that I gave."

        The benefits of this approach are:

        1. The issuer just has to maintain a list of surviving tokens and a smaller short-lived list of recent (T,C) burning activity, and use easy standard DB transactions to stop conflicts or double-spending.

        2. All the social-media site has to do is create a random number C for burning a given T, and temporarily remember the pair until it gets a yes-or-no answer.

        3. A malicious social-media site cannot separate testing the token from spending it on a legitimate site, which deters a business model of harvest-and-resale. However it could spend it immediately for its own purposes, which is worth further discussion.

        4. The idempotent API call resists connection hiccups and works for really basic retry logic, avoiding "wasted" tokens.

        5. The issuer doesn't know how or where a given token is being used, beyond what it can infer from the POST request source IP. It certainly doesn't know which social-media account it just verified, unless the two sites collude or the social-media site is stupid and doesn't use random C values.

        • akoboldfrying 2 hours ago

          This is going in the right direction, but you identified the acceptor double-spend problem.

          What about if, instead of the spender handing the token directly to the acceptor, the spender instead first makes an HTTP "I want to spend token 456" request to the issuer, which replies with a "receipt" that the spender then sends to the acceptor, which in turn sends a "If the token associated with this receipt is not yet burnt, burn it, record C next to it and report OK, otherwise if it was already recently burnt using C also report OK (for idempotence), otherwise (if it was already burnt with some other C') report FAIL" request to the issuer. The receipt not being valid as a spendable token cuts out the double-spend issue, at the cost of one extra HTTP request for the spender.

  • nkrisc an hour ago

    There's a nearly fool-proof solution: manually verify every submission.

    You can use automated systems as a first line of defense against spam, and then hire people to manually verify every submission that makes it through. You can even use that as opportunity to ensure a certain quality of submission, even if it was submitted by a person.

    Any legitimate submissions that get caught in the initial spam filter can use a manual appeal process (perhaps emailing and pleading their case which will go into a queue to be manually reviewed).

    Sure, it's not necessary easy and submissions may take some time to appear on the site, but there would be essentially zero spam and low-quality content.

    • bitwizeshift 6 minutes ago

      The article never talked about bot-generated products, only bot generated comments and upvotes. How does manual review address this exactly?

    • qup 13 minutes ago

      The bots are commenting and voting, not submitting products.

  • animal531 14 minutes ago

    Bots can apparently now beat 100% of road sign captchas, so unless if you can cycle them around its not going to do much.

  • throwaway48476 3 hours ago

    The problem is that website owners want to have their cake and eat it too. They want to make data public but not so public that it can be copied. It's the same problem as DRM which doesn't work. It's an inherent contradiction.

    Web devs also bloat the hell out of sites with MB of Javascript and overcomplicated design. It would be far cheaper to just have a static site and use CDN.

    • kamray23 an hour ago

      That'd be a nice way of looking at it, if serving content was cheap. It is not. I want to put my CV online, but I'm not willing to shill out tens of thousands every year to have it scraped for gigabytes per day. Doesn't happen, you say? Didn't before, definitely. Now there's so many scrapers building data sets that I've certainly had to block entire ranges of IPs due to repeated wasting of money.

      It's like the classic "little lambda thing" that someone posts on HN and finds a $2 million invoice in their inbox a couple weeks later. Except instead of going viral your achievements get mulched by AI.

      • throwaway48476 39 minutes ago

        Gigabytes? How big is your CV?

        >lambda thing

        I never understood why anyone thought this was ever a good idea. QoS is a natural rate limit for non critical resources.

    • imiric 2 hours ago

      CAPTCHAs can protect public resources as well. But the main problem here is about preventing generated spam content, not scraping. This can be mitigated by placing CAPTCHAs only on pages with signup/login and comment forms.

      • throwaway48476 2 hours ago

        >This can be mitigated by placing CAPTCHAs only on signup/login and comment forms.

        If only...

        Algorithmic turing test is a more interesting problem. https://xkcd.com/810/

        Traditional captcha solving ability has already been surpassed by the bots which is now why there's so many new and creative different CAPTCHAs. Until someone trains a model to solve them that is.

  • m463 12 hours ago

    I wonder if this is like the recent article about people not buying from locked display cabinets:

    https://news.ycombinator.com/item?id=41630482

    how many humans does captcha send away?

    • imiric 11 hours ago

      You're thinking of the traditional CAPTCHA that requires an active effort from the user. Those do present a barrier to entry where some users give up.

      But there's a new breed of them that work behind the scenes and are transparent to the user. It's likely that by the time the user has finished interacting with the form, or with whatever is being protected, that the CAPTCHA has already determined whether the user is a bot or not. They only block the action if they have reasons to suspect the user is a bot, in which case they can show a more traditional puzzle. How effective this is depends on the implementation, but this approach has received good feedback from users and companies alike.

  • class3shock 11 hours ago

    As someone that already often runs into them due to vpn use being flagged please no more. Think about how much human time has been wasted on these things.

    • imiric 11 hours ago

      The detection rules ultimately depend on each site. If you're using a VPN known to be the source of bot activity, you're likely being grouped with that traffic. Ideally the detection can be sophisticated enough to distinguish human users from bots based on other signals besides just their source IP, but sometimes this is not the case.

      All these usability issues are solvable. They're not a reason to believe that the problem of distinguishing bots from humans can't be approached in a better way.

    • fryry 6 hours ago

      I was researching CAPTCHA solving services yesterday that run on cheap 3rd world labour. I couldn't imagine a worst job than solving them all day.

      • throwaway48476 3 hours ago

        It's an inherently repetitive task which makes it easy to automate with ML without requiring anything like AGI.

  • capitainenemo 8 hours ago

    Annoyingly these captchas that apparently safeguard user privacy make websites completely unusable when using Firefox fingerprinting protection.

    • imiric 6 hours ago

      I'm not claiming that these systems are perfect. Those edge cases should be resolved.

      But if we want the internet to remain usable, our best chance is to fight back and improve our bot detection methods, while also improving all the other shortcomings people have associated with CAPTCHAs. Both are solvable technical problems.

      The alternatives of annoying CAPTCHAs that don't work well, or no protection at all, are far worse in comparison.

catinblack 13 hours ago

When I posted my product on producthunt (and that was about 5 years ago) I got dozens of props with a first place guarantee. Literally an hour after posting, I was bombarded with messages. Now it's probably even worse.

nnurmanov 9 hours ago

The question is who is on PH? Customers? I doubt it. Indiehackers? Probably? Who are we selling to? Is there a point to even launch on PH?

  • paraknight 3 hours ago

    This is an excellent point. We launched on PH and reached #1 of the day and #1 of the week. We barely got any new customers, but we did get a lot of inbound investor interest. I'd say that if you're fundraising, it's worth it, but otherwise you need to go to where your customers are (ours were not PH users).

stevage 9 hours ago

It's problematic doing this analysis that starts with your own ad-hoc categorisation of whether a user is a bot or not, which you have no way of validating. If that categorisation is wrong, then all the analysis is wrong.

I noticed in particular this:

> In late 2022, bot comments really took off... around the same time ChatGPT was first widely available.

But remember that one aspect of the categorisation is:

> Did you know ChatGPT generated comments have a higher frequency of words like game-changer? Bot comments also contained characters not easily typeable, like em-dash, or the product’s name verbatim even when it’s very long or contains characters like ™ in the name.

So...he categorises users as bots if they behave like ChatGPT, and then thinks he has found something interesting when the number of users that behave like that goes up after ChatGPT was released. But it's also possible there were already lots of bots before that, they just used different software that behaves differently so he doesn't detect it.

  • kelnos 8 hours ago

    True, but if his categorization of ChatGPT-using bots is correct, I think it's at least notable to see that ChatGPT-generated comments taking off was/is actually a thing. And if the categorization of ChatGPT-generated comments is correct, it's notable that -- even if he's undercounting all bots (including those not using ChatGPT) -- bot-generated comments have far outstripped the number of real-person-generated comments.

    Of course, like you say, this is quite a few "ifs". If the assumptions I'm making don't hold, neither does the conclusion.

    • klipt 7 hours ago

      As LLMs have become more realistic, Dead Internet Theory is actually coming true. (Maybe not the government controlled part, but the bot generated content part is still concerning)

  • sublimefire 2 hours ago

    The post starts with a prompt injection test. The premise is set with an evidence. Suggest alternative categorisation as otherwise your comment seems to be made in bad faith and is unhelpful.

  • throwaway48476 3 hours ago

    Such statistical methods can be accurate for determining whether a comment section is full of bots but much less accurate for determination if any one particular comment is a bot.

oaklander 19 hours ago

Since I know you personally, I know how much work you put into this and it shows. Nicely done

  • welder 19 hours ago

    Thanks Siri! Yep back to normal work now.

sinuhe69 8 hours ago

In the old time, we had a web of trust (WOT) to vote for websites. Can a web of trust for humans help fighting bots?

I imagine I can vouch for a dozen of accounts that they are indeed human. Similarly, other people can vote for me, and so we can build a web of trust. Of course, we will need seeds, but they can be verified accounts or relatively easily established through social media connections and interactions.

I think X and Meta know for quite sure which accounts are bots. But they do not seem interested in offering this knowledge as a service.

  • Jerrrrrrry 5 hours ago

    those get swept and derailed for national convenience purposes

    • jumping_frog 2 hours ago

      Governments can have their "Deep State".

      Social Media should have "Surface State" to save the humanity from Deep State. Interconnected group of vigilante people trying to reveal disguised deep state nefarious intentions before those are put into action.

amiantos 15 hours ago

I have a couple posts on reddit that didn't receive a lot of comments but every week or so it'll get a comment that is some GPT-powered bot going, "<topic of post on reddit>? Wow! That's really thought provoking, I wonder about why <topic of post on reddit> is important," and so on, asking me very obvious questions in an attempt to get me to feed the system more data.

I wouldn't be surprised to find out these bots are actually being run by reddit to encourage engagement.

  • akincisor 14 hours ago

    See the history of Reddit. It was manually curated sock puppets before bots were viable, and now that bots are viable, I strongly believe the bulk of comments and posts in the popular subreddits are bots (and many are run by reddit themselves).

    • jumping_frog 2 hours ago

      One doesn't even need to use llms. I caught a bot which was cross-pollinating comments from youtube's comment section into reddit when that youtube video was shared on reddit.

      How about a totally fake social media populated by llm bots and rake in VC moolah?

mulhoon 19 hours ago

Somebody needs to sell t-shirts “me good LLM”

  • welder 13 hours ago

    I'll print some, you can pick them up for free in SF. Hoodies and tees.

    • jenny91 12 hours ago

      How will my bot buy one if it has to turn up in person.

      • welder 12 hours ago

        Soon the bot will send a human to pick it up for them.

  • novoreorx 18 hours ago

    I will wear it everytime I meet AI investors

dom96 19 hours ago

I expect that nowadays many online are speaking with GenAI without even realising it.

It’s already been bad enough that you may be unknowingly conversing with the same person pretending to be someone else via multiple accounts. But GenAI is crossing the line in making it really cheap for narratives to be influenced by just building bots. This is a problem for all social networks and I think the only way forward is to enforce validation of humanity.

I’m currently building a social network that only allows upvote/downvote and comments from real humans.

  • wavemode 12 hours ago

    I agree with the other commenters that you'll face a lot of challenges, but personally I hope you succeed. Sounds like an idea worth pursuing.

    • dom96 11 hours ago

      Thank you :)

  • pixl97 19 hours ago

    >I’m currently building a social network that only allows upvote/downvote and comments from real humans.

    And how exactly do you do that? At the end of the day there is no such thing as a comment/vote from a real human, it's all mediated by digital devices. At the point the signal is digitized a bot can take over the process.

    • dom96 19 hours ago

      By validating each account with a passport and ensuring that only one account per passport is allowed.

      • SoftTalker 19 hours ago

        Aside from the practical and technical problems you're greatly limiting your audience. Most people who don't travel internationally don't have a passport. In Europe this might be a smaller number but in the USA and Canada I would guess this is a majority of people. Non-citizens won't have a passport. Most young adults will not have one. Many older people will have let theirs expire.

        • dom96 18 hours ago

          Yeah, that's the challenge. My bet is that there are enough people out there with passports to make an interesting social network.

          Of course, getting someone to share their passport will be another filter. But I hope that I can convince people that the benefits are worth it (and that I will be able to keep their data safe, by only storing hashes of everything).

          • pixl97 18 hours ago

            Ok, so you get some critical amount of 'humans' to share "a" passport. Now you've built an expensive and high quality lead finder for scammers/spammers. You've increased the value floor for people submitting fake passports. Also, how are you paying for verification of the passports?, VC money to get from the loss phase to the making enough to support itself on ads? How are you dealing with government data requests, especially in the case where said data can be attributed to a real human?.

            Maybe I'm wrong, but just a social network of 'real people' doesn't seem like enough in itself. What is going to bring people there with the restrictions and potential risks you're creating.

            • dom96 17 hours ago

              You could very well be right. I'm willing to give it a shot and see.

              All I can say is that I personally see huge value in a social network for real people. Personally I am sick of arguing with what are likely legions of duplicate accounts/bots/russian trolls online. I want some reassurance that I'm not wasting my time and am actually reaching real humans in my interactions online.

              Success to me is 1000 MAU. There are companies out there that do passport verification for a reasonable fee with a fair amount of free verifications to start with (which will handle 1000 MAU just fine). If the number of users wishing to take part is significantly higher then I will explore either ads or charging a small fee during registration.

              I'm still very far from needing to cross that bridge though. Same for some of the other questions you've raised. I'd have to do a lot more research to come to a solid stance of what to do when government data requests come in. But I would guess that there isn't much choice but to abide by the request. If you want true anonymity from the government then this place will not be for you (but then I'd say not many social networks are for you in that case)

              • throwaway48476 3 hours ago

                I would stay the hell away from any site that wants a copy of my passport.

                As for a new turing test there is a magic word that bots are not allowed to use that guarantees you're talking to a human.

                • dom96 2 hours ago

                  There is no such word. Even if there is it would be trivial to create a bot that avoids this.

                  Passports are the only practical way to ensure that I’m talking to a human. Do you have a better idea?

                  • throwaway48476 2 hours ago

                    If I think you're a bot I'd ask you to say the word. The word exists and it works every time. No one is going to train a foundation model just to say the word, it's too expensive.

                    Prompt injection can be mitigated but not this prompt rejection.

              • ddtaylor 11 hours ago

                I see a lot of value in a network where I can be confident I am talking to real people.

                As a user I can't do anything related to the passport stuff and I know many people who likewise wouldn't be interested in doing that, because we live in the states. A more "normal" approach here would be to use one of the government ID verification systems for your state ID. Most of us are willing to expose that information, since it's what you show when you go to the store to prove your age/identity.

                • dom96 2 hours ago

                  I’ll definitely look into this. But starting with them would limit me to the US only (I’m sure other countries have similar things, integrating each would take a lot of work though). I want to start with passports because they are a true multi national document.

      • lobsterthief 19 hours ago

        What about people who buy stolen passports on the dark web? Or passport details that get leaked in data breaches

        • mike_hearn 13 hours ago

          Passports have digitally signed certificates in them, readable by any device with an NFC radio. It's easy enough to extract that data with a mobile app, and now you have an unforgeable file that can be hashed. Of course whether users will bother to sign up for something like a social network if they have to go rummage around and find a passport, install a mobile app etc, I don't know. But it's technically possible and I've proposed such a scheme years ago. For bonus points do the hashing inside a secure enclave so the ePassport data is never upload to a remote server in the clear.

        • unethical_ban 19 hours ago

          We need a German to chime in with whatever word describes this scenario, when someone suggests an action is not worth doing because of corner cases or inability to perfectly execute a process.

          In this hypothetical, let's say we'd tackle the dark web passport market issue when we get there.

          • dom96 18 hours ago

            hehe yeah, it's funny how many are focusing on the small edge case that makes a certain solution not 100% perfect.

            There is also another issue: people can have more than one passport if they're dual citizens. But you know what... I think that's fine.

      • n_ary 18 hours ago

        How do I know that, you will handle my passport data with care? Banks I can trust(despite numerous leaks), you as a random social media or online service with zero regulation, I won’t. Plus this opens up immense ways to sue you for collecting unnecessary data and personal information, unless you are massive and have an army or lawyer or have a KYC requirement.

        • dom96 17 hours ago

          I plan to outline exactly what data I store. I don't plan to store raw passport number/name details, rather a hash so I can verify the same passport isn't used twice.

          So even if the DB leaks no one (except the government) will be able to tie your real life identity to the account.

          • ddtaylor 11 hours ago

            In the US there are a lot of companies that do these kinds of identity verification for you. I had to do one when I was setting up a new Steam Developer account and many use them for regular day-to-day verification like for work or when submitting important government requests.

            It would seem a lot better to just partner with an existing company that takes care of that part of identity verification. Your job is still to compose all of these signals and metrics and distill them into a simple "everyone is human" network, but the actual job of being a passport jockey can be avoided IMO.

            • dom96 2 hours ago

              Yep. I’ve considered doing it myself but it would require setting up an app for iOS and Android. For now relying on existing services will be fine. If this takes off I might set up my own passport verification service.

      • cynicalpeace 18 hours ago

        No passport, but perhaps face picture that is also encoded with on device verifiable token.

        I think there is actually a use case for blockchain (don't pile on!) for this. I have a vague idea of a unique token that goes on every camera and can be verified from the blockchain. If a picture has the token you know it's real... like i said, it's vague idea but i think it's coming

        • dom96 17 hours ago

          No need for blockchain. All you need is this: https://contentauthenticity.org/.

          The problem with this is that it's still easy to forge.

          I'll certainly consider playing with ways to identify human uniqueness that don't require passports. But passports are the most obvious route to doing so.

      • gregw134 14 hours ago

        Why not require a non-voip phone number instead of passport?

        • dom96 13 hours ago

          SIM cards are as cheap as £1 where I live. £1 per account is nothing and I wouldn't be surprised if there are ways to get unique numbers for even less.

          • ddtaylor 11 hours ago

            There are lots of ways to get verification codes for any number of phones for absurdly cheap. They run large farms (or hacked devices or shady apps, who knows) and can provide a service where you get hundreds of SMS verifications for a few bucks a month.

      • kjkjadksj 19 hours ago

        Why would I ever give my passport to your website or anyone elses?

        • dom96 18 hours ago

          Because it enables you to be a part of a social media that is guaranteed to be majority human-centred. Why wouldn't you give your passport to a website? Don't you already do so for crypto or other banking?

      • metalliqaz 19 hours ago

        who is going to upload their passport to use social media?

        • dom96 18 hours ago

          People already upload their passport for lots of things, why not social media?

          • metalliqaz 17 hours ago

            > People already upload their passport for lots of things

            I sure don't.

            > why not social media?

            privacy?

            • dom96 17 hours ago

              For me, sacrificing some privacy is worth it to be in a community with users who I know are real people. If that's not something that is important to you then that's fine.

              In my case, right now it's very easy for you to figure out my real name by just googling my nickname. Registering on a website like the one I am implementing won't sacrifice much more of my privacy.

  • stevage 9 hours ago

    >I’m currently building a social network that only allows upvote/downvote and comments from real humans.

    What's your method for detecting real humans?

    Also will your social network allow bots labelled as bots?

    • dom96 2 hours ago

      See discussion above for details, but in short: by requiring user passports.

      Yeah, I’ll probably make it possible to set up bots that are clearly labelled.

  • VyseofArcadia 19 hours ago

    Good bot (/s)

    I don't know that "real humans" is good enough. You can do plenty of manipulation on social networks by hiring lots of real humans to upvote/downvote/post what you want. It's not even expensive.

    • dom96 19 hours ago

      Yeah. But the cost is significantly higher than ramping up GenAI to make you thousands of accounts.

      There is no fool proof solution to this. But perfect is the enemy of the good. Right now social media is pretty far from being “good”.

delichon 18 hours ago

I have a year old lurker account on X. I've never made a single comment with it. But 35 attractive women are now following me. Zero men, zero unattractive women. I doubt that it is the result of the animal magnetism of my likes.

It's a microcosm of the whole darned web.

  • EasyMark 18 hours ago

    I blocked a bunch of those accounts and the number of rando hot girls trying to get me to follow them dropped really quickly. You may give that a try. Maybe they have some very rough algo that does attempt to stop those follow varieties that you block a lot of

    • sdenton4 15 hours ago

      They're trying to simulate real accounts by following people. Blocks are likely a strong signal for anti spam. So if the spammers notice that you block bots, they won't want to burn accounts by following you, and will instead follow accounts that don't block bots.

    • bryanrasmussen 17 hours ago

      probably the rando hot girls are all run by the same big bot farms, or bot farms sell suckers lists so if you are a sucker for one rando hot girl bot the others soon find out.

  • 93po 18 hours ago

    I feel like looking at this sort of behavior would make it really easy to spot bot accounts. I have the same thing happening on my account.

dewey 19 hours ago

PH has always been a weird place, the comments are always Linkedin level of boring (Copy paste positivity about basically every product) and it always felt like people were just commenting there to funnel people to their own profile.

  • WD-42 19 hours ago

    I feel the same. I’m not surprised by this, a sleazy site is going to attract sleazy actors. Me good llm.

tomalaci 19 hours ago

This is pretty much progress on dead internet theory. The only thing I think that can stop this and ensure genuine interaction is with strong, trusted identity that has consequences if abused/misused.

This trusted identity should be something governments need to implement. So far big tech companies still haven't fixed it and I question if it is in their interests to fix it. For example, what happens if Google cracks down hard on this and suddenly 60-80% of YouTube traffic (or even ad-traffic) evaporates because it was done by bots? It would wipe out their revenue.

  • brookst 19 hours ago

    > It would wipe out their revenue.

    Disagree. YouTube's revenue comes from large advertisers who can measure real impact of ads. If you wiped out all of the bots, the actual user actions ("sign up" / "buy") would remain about the same. Advertisers will happily pay the same amount of money to get 20% of the traffic and 100% of the sales. In fact, they'd likely pay more because then they could reduce investment in detecting bots.

    Bots don't generate revenue, and the marketplace is somewhat efficient.

    • mbesto 14 hours ago

      > YouTube's revenue comes from large advertisers who can measure real impact of ads.

      Not necessarily. First, attribution is not a solved problem. Second, not all advertisement spend is on direct merchandising, but rather for branding/positioning where "sign up" / "buy" metrics are meaningless to them.

    • Veuxdo 19 hours ago

      > In fact, they'd likely pay more because then they could reduce investment in detecting bots.

      A lot more. Preventing bots from eating up your entire digital advertising budget takes a lot of time and money.

      • speed_spread 14 hours ago

        Making advertising more efficient would also open up opportunities for smaller players. Right now only the big guys have the chops to carpet-bomb the market regardless of bots. Noise benefits those who can afford to stand above it.

        • jumping_frog an hour ago

          Pay per impression is a metric adexchanges use. So it is not in best interest of companies to remove bots. Bots create "economic activity" when such metrics are used.

    • netcan 19 hours ago

      Yes... but maybe also no. Well measured advertising budgets are definitely part of the game. But so are poorly measured campaigns. Type B often cargo cult A. It's far from a perfect market.

      In any case, Adwords is at this point a very established product... very much an incumbent. Disruption generally, does not play to their favor by default.

    • nitwit005 11 hours ago

      Advertisers have spent decades pressing Google to do something about fraudulent ad clicks/views, and occasionally tried to sue Google over being billed for the fraud.

    • drawkward 11 hours ago

      Advertisers would have to pay far less, because of fewer fake impressions. Furthermore, advertising would seem to be more effective, since bots don't buy the product. The publishers, however, would hate it.

      The problem is, the bots seem like a scam perpetrated by publishers to inflate their revenue.

    • kibwen 12 hours ago

      What on Earth has given so many people in this thread the confidence to assert that marketing departments actually have any real way to gauge the effectiveness of a given ad campaign? It's effectively impossible to adjust for all the confounding variables in such a chaotic system, so ad spend is instead determined by internal politicking, pseudoscientific voodoo, and the deftness of the marketing department's ability to kiss executive ass. This ain't science, it's perversely-incentivized emotion.

      • dotancohen 3 hours ago

        The fact that sales go up in correlation to when marketing campaigns are run.

  • mike_hearn 13 hours ago

    Well you're about to find out, because YouTube is doing a massive bot/unofficial client crackdown right now. YTDL, Invidious etc are all being banned. Perhaps Google got tired of AI competitors scraping YouTube.

    In reality, as others have pointed out, Google has always fought bots on their ad networks. I did a bit of it when I worked there. Advertisers aren't stupid, if they pay money for no results they stop spending.

    • LorenPechtel 9 hours ago

      I've found the opposite. These days I absolutely can't play a video in YouTube, it always comes back as not available. But I can use JDownloader to grab it unless there are language issues. (It unfortunately might not get the right language for the audio.)

      • mike_hearn 2 hours ago

        JDownloader is also having issues:

        https://board.jdownloader.org/showthread.php?t=48894&page=16...

        but it seems like YT have various rules for when they do and don't trigger bans. Also this is a new change which they usually roll out experimentally, and per client at that. So the question is only how aggressive do they want to be. They can definitely detect JDownloader as a bot, and do.

    • internet101010 13 hours ago

      yt-dlp works just fine for me. Or are you saying that are limiting those that do downloads in bulk?

      • mike_hearn 13 hours ago

        Probably the latter. yt-dlp can be detected and it yields account / IP bans, it seems. They've been going back and forth around the blocks for weeks but only by claiming to be different devices, each time they do the checks are added for the new client and they have to move onto the next. There's a finite number of those.

        Here's a comment from Invidious on the matter:

        https://github.com/iv-org/invidious/issues/4734#issuecomment...

  • joseda-hg 19 hours ago

    This still breaks some parts of the internet, where you wouldn't want to associate your identity with your thoughts or image

    • cryptonector 17 hours ago

      Think attribute certificates.

      • AnthonyMouse 12 hours ago

        There are only two real ways to implement that. One is the "attribute certificate" is still tied to your full identity and then people won't be willing to associate them. The other is that the attribute certificates are fully generic (e.g. everyone over 18 gets the same one) and then someone will post it on the internet and, because there is no way to tie it back to a specific person, there is no way to stop them and it makes the system pointless.

        • cryptonector 5 hours ago

          Correct. In practice the latter isn't really possible because the issuer can always record the subject public key info, or the serial number, or a hash of the certificate, and they can then use that to identify the real subject. However for low-value things I might use them.

          • AnthonyMouse 3 hours ago

            No, you can do the latter. You literally have a secret that implies the bearer meets the particular characteristic (e.g. is over 18). They don't each get their own certificate, they all get the exact same one down to the last byte, so you can't correlate it with anything other than the group of people who are over 18.

            But then there's nothing stopping any of them from sharing the secret with people outside the group.

  • pilgrim0 19 hours ago

    I think on the same lines. Digital identity is the hardest problem we’ve been procrastinating in solving since forever, because it has the most controversial trade offs, which no two persons can agree on. Despite the well known risks, it’s something only a State can do.

  • cryptonector 17 hours ago

    > This trusted identity should be something governments need to implement.

    Granting the premise for argument's sake, why should governments do this? Why can't private companies do it?

    That said, I've long thought that the U.S. Postal Service (and similarly outside the U.S.) is the perfect entity for providing useful user certificates and attribute certificates (to get some anonymity, at least relative to peers, if not relative to the government).

    The USPS has:

      - lots of brick and mortar locations
      - staffed with human beings
      - who are trained and able to validate
        various forms of identity documents
        for passport applications
    
    UPS and FedEx are also similarly situated. So are grocery stores (which used to, and maybe still do have bill payment services).

    Now back to the premise. I want for anonymity to be possible to some degree. Perhaps AI bots make it impossible, or perhaps anonymous commenters have to be segregated / marked as anonymous so as to help everyone who wants to filter out bots.

    • throwway120385 14 hours ago

      I used to think that, but recently had a really bad experience with a lot of runaround with them when we had to have our mail held for a few weeks while we sorted out a mailbox break-in. We would go to one post office that was supposed to have our mail and be told to go to another post office, then get redirected back to the first post office multiple times. And they kept talking about how they had to work out the logistics and everything was changing over and over. Some of the managers seemed to give my wife the wrong information to get rid of her.

      There were a few managers who tried to help and eventually we got our mail but the way everything worked out was absurd. I think they could handle national digital identity except that if you ever have a problem or need special treatment to address an issue buckle up because you're in for a really awful experience.

      The onboarding and day-to-day would probably be pretty good given the way they handle passport-related stuff though.

    • rurp 13 hours ago

      > why should governments do this? Why can't private companies do it?

      A private company will inevitably be looking to maximize their profit. There will always be the risk of them enshittifying the service to wring more money out of citizens and/or shutting it down abruptly if it's not profitable.

      There's also the accountability problem. A national ID system would only be useful if one system was widely used, but free markets only function well with competition and choice. It could work similar to other critical services like power companies, but those are very heavily regulated for these same reasons. A private system would only work if it was stringently regulated, which I don't think would be much different from having the government run it internally.

      • AnthonyMouse 12 hours ago

        > A national ID system would only be useful if one system was widely used, but free markets only function well with competition and choice.

        Isn't this also a problem with having the government do it? E.g. it's supposed to prevent you from correlating a certification that the user is over 18 with their full identity, but it's insecure and fails to do so, meanwhile the government won't fix it because the administrative bureaucracy is a monopoly with limited accountability or the corporations abusing it for mass surveillance lobby them to keep the vulnerability.

        • throwaway48476 3 hours ago

          The government would never create a national ID system because it would break the current economic model.

        • jprete 9 hours ago

          The government at least doesn't have a direct incentive to screw it up and can be pressured by a lot more groups than just industry lobbyists.

          • AnthonyMouse 3 hours ago

            Don't they? If they promise privacy and then don't deliver it, there are a lot of government agencies and politicians that would be championing the new tool for rooting out crimethink.

      • internet101010 13 hours ago

        It could be done similar to how car inspections are done in Texas: price is set statewide, all oil change places do the service, and you redeem a code after.

        The problem with this though is the implications of someone at whatever the private entity is falsely registering people under the table - this would need to be considered a felony in order for it to work.

    • consteval 16 hours ago

      I think the main argument for having the government do it as opposed to the private sector is that the gov has a lot more restrictions and we, the people, have a say. At least theoretically.

      Imagine if Walmart implemented an identity service and it really took off and everyone used it. Then, imagine they ban you because you tweeted that Walmart sucks. Now you can't get a rental car, can't watch TV, maybe can't even get a job. A violation of the first amendment in practice, but no such amendment exists for Walmart.

      • cryptonector 15 hours ago

        We're already there. Apple and Google know who we all are because we had to pay for our devices.

        The government has no real restrictions.

  • datadrivenangel 19 hours ago

    You would assume that Advertising companies with quality ad space would be able to show higher click through rates and higher impression to purchase rates -- overall cost per conversion -- by removing bots that won't have a business outcome from the top of the funnel.

    But attribution is hard, so showing larger numbers of impressions looks more impressive.

    • carlosjobim 14 hours ago

      Attribution is extremely easy, it is a solved problem.

      Companies keep throwing away money on advertising for bots and other non-customers because they either:

      A) Are small businesses where the owner doesn't care about what he's doing and enjoys the casino like experience of buying ads online and see if he gets a return, or

      B) Are big businesses where the sales people working with online ads are interested in not solving the problem, because they want to keep their salaries and budget.

  • jenny91 12 hours ago

    > This trusted identity should be something governments need to implement.

    I have been thinking about this as well. It's exactly the kind of infrastructure that governments should invest in to enable new opportunities for commerce. Imagine all the things you could build if you could verify that someone is a real human somehow with good accuracy (without necessarily verifying their identity).

  • paulnpace 10 hours ago

    Governments can solve technical problems no one else can?

  • nxobject 19 hours ago

    I think that's also part of Facebook's strategy of being as open with llama as possible – they can carve out the niche as the "okay if we're going to dive head first into the dead internet timeline, advertisers will be comforted by the fact that we're a big contributor to the conversation on the harms of AI – by openly providing models for study."

    • jumping_frog an hour ago

      I think Nvidia should publicly declare that they will continue to build (even if Meta decides to stop) open llms so that their hardware is sold. Give away the software so that hardware gets sold. Similar to Google gives away Android to OEMs.

  • zackmorris 11 hours ago

    The real problem is how to prove identity while also guaranteeing anonymity.

    Because Neo couldn't have done what he did by revealing his real name, and if we aren't delivering tech that can break out of the Matrix, what's the point?

    The solution will probably involve stuff like Zero-Knowledge Proofs (ZKPs), which are hard to reason about. We can imagine a future where all user data is end-to-end encrypted, circles of trust are encrypted, everything runs through onion routers, etc. Our code will cross-compile to some kind of ZKP VM running at some high multiple of computing power needed to process math transactions, like cryptocurrency.

    One bonus of that is that it will likely be parallelized and distributed as well. Then we'll reimplement unencrypted algorithms on top of it. So ZKP will be a choice, kind of like HTTPS.

    But when AI reaches AGI in the 2040s, it will be able to spoof any personality. Loosely that means it will have an IQ of 1000 and beat all un-augmented humans in any intellectual contest. So then most humans will want to be augmented, and the arms race will quickly escalate, with humanity living in a continuous AR simulation by 2100.

    If that's all true, then it's basically a proof of what you're saying, that neither identity nor anonymity can be guaranteed (at least not simultaneously) and the internet is dead or dying.

    So this is the golden age of the free and open web, like the wild west. I read a sci fi book where nobody wore clothes because with housefly-size webcams everywhere, there was no point. I think we're rapidly headed towards realtime doxxing and all of the socioeconomic eventualities of that, where we'll have to choose to forgive amoral behavior and embrace a culture of love, or else everyone gets cancelled.

    • LorenPechtel 8 hours ago

      I don't even think it's possible. Same as humans at captcha farms.

      Also, consider the NPD breach. What happens when that database of humans gets compromised as it most certainly will someday?

  • gregw134 14 hours ago

    What's best practice for preventing bot abuse, for mere mortal developers? Would requiring a non-voip phone number at registration be effective?

    • AnthonyMouse 12 hours ago

      There is no such thing as a "non-VoIP phone number". All phone numbers are phone numbers. Some people try to ban blocks assigned to small phone providers, but some actual humans use those. Meanwhile major carriers are leasing numbers to anyone who pays from the same blocks they issue to cellular customers. Also, number portability means even blocks don't mean anything anymore.

      Large companies sometimes claim to do this "to fight spam" because it's an excuse to collect phone numbers, but that's because most humans only have one or two and it serves as a tracking ID, not because spammers don't have access to a million. Be suspicious of anyone who demands this.

    • spacebanana7 12 hours ago

      If it’s really important to you then use Apple / Google / GitHub login.

      Obviously this has many downsides, especially from a privacy perspective, but it quickly allows you to stop all but the most sophisticated bots from registering.

      Personally I just stick my sites behind Cloudflare until they’re big enough to warrant more effort. It prevents most bots without too much burden on users. Also relatively simple to move away from.

      • gregw134 12 hours ago

        Does that really work? I'm trying to build a site with upvotes--wouldn't it be really easy for someone with 100 bought Google accounts to make 100 accounts on my site?

        • PokestarFan 10 hours ago

          Google is working hard to make it so you shouldn't be able to easily make new accounts. New accounts basically require a phone number and you can only use one phone number so many times before they won't let you use that phone number any more times. Grandfathered accounts don't have this problem yet so this is why Google is trying to crack down on long-unused accounts.

          • LorenPechtel 8 hours ago

            And, annoyingly, being a little too aggressive about it.

            Google apparently decided my wife's gmail account was unused. The mail part was other than some forwarding rules (she lives on WeChat, not email.) She's been consistently logged in with YouTube and Translate, though--and now the only way I can get Translate to work is by logging her out.

    • changing1999 12 hours ago

      Unfortunately, every anti-bot feature also harms real people. As a voip user, I wouldn't be able to sign up for your app.

  • bityard 18 hours ago

    I've been thinking about how AI will affect ad-supported "content" platforms like YouTube, Facebook, Twitter, porn sites, etc. My prediction is that as AI-generated content improves in quality, or at least believability, they will not prohibit AI-generated content, they will embrace it whole-heartedly. Maybe not at first. But definitely gradually and definitely eventually.

    We know that these sites' growth and stability depends on attracting human eyeballs to their property and KEEPING them there. Today, that manifests as algorithms that analyze each person's individual behavior and level of engagement and uses that data to tweak that user's experience to keep them latched (some might say addicted, via dopamine) to their app on the user's device for as long as possible.

    Dating sites have already had this down to a science for a long time. There, bots are just part of the business model and have been for two decades. It's really easy: you promise users that you will match them with real people, but instead show them only bots and ads. The bots are programmed to interact with the users realistically over the site and say/do everything short of actually letting two real people meet up. Because whenever a dating site successfully matches up real people, they lose customers.

    I hope I'm wrong, but I feel that social content sites will head down the same path. The sites will determine that users who enjoy watching Reels of women in swimsuits jump on trampolines can simply generate as many as they need, and tweak the parameters of the generated video based on the user's (perceived) preferences: age, size, swimsuit color, height of bounce, etc. But will still provide JUST enough variety to keep the user from getting bored enough to go somewhere else.

    It won't just be passive content that is generated, all those political flamewars and outrage threads (the meat and potatoes of social media) could VERY well ALREADY be LLM-generated for the sole purpose of inciting people to reply. Imagine happily scrolling along and then reading the most ill-informed, brain-dead comment you've ever seen. You know well enough that they're just an idiot and you'll never change their mind, but you feel driven to reply anyway, so that you can at LEAST point out to OTHERS that this line of thinking is dangerous, then maybe you can save a soul. Or whatever. So you click Reply but before you can type in your comment, you first have to watch a 13-second ad for a European car.

    But of course the comment was never real, but you, the car, and your money definitely are.

  • romanovcode 19 hours ago

    > This trusted identity should be something governments need to implement.

    I rather live with dead-internet than this oppressive trash.

  • dom96 19 hours ago

    What do governments need to implement? They already give you a passport which can be used as a digital ID.

    • JimDabell 19 hours ago

      Services need the ability to obtain an identifier that:

      - Belongs to exactly one real person.

      - That a person cannot own more than one of.

      - That is unique per-service.

      - That cannot be tied to a real-world identity.

      - That can be used by the person to optionally disclose attributes like whether they are an adult or not.

      Services generally don’t care about knowing your exact identity but being able to ban a person and not have them simply register a new account, and being able to stop people from registering thousands of accounts would go a long way towards wiping out inauthentic and abusive behaviour.

      I think DID is one effort to solve this problem, but I haven’t looked into it enough to know whether it’s any good:

      https://www.w3.org/TR/did-core/

      • dom96 19 hours ago

        Agreed that offering an identifier like this would be ideal. We should be fighting for this. But in the meantime, using a passport ticks most of the boxes in your list.

        I’m currently working on a social network that utilises passports to ensure account uniqueness. I’m aware that folks can have multiple passports, but it will be good enough to ensure that abuse is minimal and real humans are behind the accounts.

        • JimDabell 18 hours ago

          The main problem with this is that a hell of a lot of people don’t want to give sensitive personal documents to social media platforms.

          • dom96 18 hours ago

            Yeah. That will be the challenge.

            I hope that enough are willing to if the benefits and security are explained plainly enough. For example, I don’t intend to store any passport info, just hashes. So there should be no risk, even if the DB leaks.

            • fwip 13 hours ago

              First, not everyone has passports - there are roughly half as many US passports as Americans.

              Second, how much of the passport information do you hash that it's not reversible? If you know some facts about your target (imagine a public figure), could an attacker feasibly enumerate the remaining info to check to see if their passport was registered in your database? For example, there are only 2.6 billion possible American passport numbers, so if you knew the rest of Taylor Swift's info, you could conceivably use brute-force to see if she's in your database. As a side effect, you'd now know her passport number, as well.

              • AnthonyMouse 13 hours ago

                > Second, how much of the passport information do you hash that it's not reversible?

                That doesn't even matter. You could hash the whole passport and the passport could contain a UUID and the hash db would still be usable to correlate identities with accounts, because the attacker could separately have the victim's complete passport info. Which is increasingly likely the more sites try to use passports like this, because some won't hash them or will get breached sufficiently that the attackers can capture passport info before it gets hashed and then there will be public databases with everybody's complete passport info.

                • fwip 10 hours ago

                  Good point.

              • dom96 11 hours ago

                That’s a very good point and indeed it is a potential attack vector.

                I’ll have to think about that. Perhaps I can get away with not tying the passport hash to a particular user.

    • secabeen 19 hours ago

      Less than half of Americans have passports, and of the remaining half, a significant fraction do not have the necessary documents to obtain one. Many of these people are poor, people of color, or marginalized in other ways. Government ID is needed, but you generally find the GOP against actually building a robust, free, ubiquitous system because it would largely help Americans who vote Democratic. This is also why the GOP pushes Voter ID, but without providing any resources to ensure that Americans can get said ID.

      • int_19h 18 hours ago

        To be fair, you generally don't see Dems pushing for such a free and ubiquitous system, either - "voter ID is bad" is so entrenched on that side of the aisle that any talk about such a system gets instant pushback, details be damned.

    • mrybczyn 19 hours ago

      Passport might be a bit onerous - it's expensive and painful process and many don't need it.

      But it's a hilarious sign of worldwide government incompetence that social insurance or other citizen identification cards are not standard, free, and uniquely identifiable and usable for online ID purposes (presumably via some sort of verification service / PGP).

      Government = people and laws. Government cannot even reliably ID people online. You had one job...

      • int_19h 18 hours ago

        When it comes to government-issued IDs, "standard" and "free" is a solved problem in almost every country out there. US is a glaring exception in this regard, particularly so among developed countries. And it is strictly a failure of policy - US already has all the pieces in place for this, they just need to be put together with official blessing. But the whole issue is so politicized that both major parties view it as unacceptable deviation from their respective dogmas on the subject.

      • JimDabell 18 hours ago

        > But it's a hilarious sign of worldwide government incompetence that social insurance or other citizen identification cards are not standard, free, and uniquely identifiable and usable for online ID purposes (presumably via some sort of verification service / PGP).

        Singapore does this. Everybody who is resident in Singapore gets an identity card and a login for Singpass – an OpenID Connect identity provider that services can use to obtain information like address and visa status (with user permission). There’s a barcode on the physical cards that can be scanned by a mobile app in person to verify that it’s valid too.

      • secabeen 19 hours ago

        In the United States, the lack of citizen identification cards is largely due to Republican opposition. People who lack ID are more likely to be democratic voters, so there is an incentive to oppose getting them ID. There's also a religious element for some people, connected to Christian myths about the end of the world.

        • cryptonector 17 hours ago

          This is utter nonsense.

          • consteval 16 hours ago

            It's kind of half true - there is an association between not having an ID and being blue. Because people without IDs are more likely to be people of color or of other marginalized groups, which then are more likely to be blue.

            In addition, there's a strong conservative history of using voter id as a means of voter suppression and discrimination. This, in turn, has made the blue side immediately skeptical of identification laws - even if they would be useful.

            So, now the anti-ID stuff is coming from everywhere.

            • cryptonector 14 hours ago

              It's absolutely not true. People have to supply IDs for tons of activities. They have IDs. We know who they are. They are registered to vote -- how did that happen w/o ID? Of course they have IDs.

              • consteval 13 hours ago

                The statistics just don't back this up. Plenty of, predominantly poor, people don't have driver's licenses. And that's typically the only ID people have. Also, poorer people may work under the table or deal in cash.

                • cryptonector 13 hours ago

                  Link the stats please. There are ID types other than driver's licenses. In fact, the DMVs around the country issue non-driver IDs that are every bit as good as driver licenses as IDs.

              • fwip 13 hours ago

                Where do you get this idea that you need to have an ID card in order to register to vote? It's certainly not a federal requirement.

                In NY, you can register with ID, last 4 digits of your social, or leave it blank. If you leave it blank, you will need to provide some sort of identification when voting, but a utility bill in your name and address will suffice.

  • solumunus 19 hours ago

    > For example, what happens if Google cracks down hard on this and suddenly 60-80% of YouTube traffic (or even ad-traffic) evaporates because it was done by bots? It would wipe out their revenue.

    Nonsense. Advertisers measure results. CPM rates would simply increase to match the increased value of a click.

  • kjkjadksj 19 hours ago

    On the other hand I think the best social media out there today is 4chan. Entirely anonymous. Also, the crass humor and nsfw boards act as a great filter to keep out advertising bot networks from polluting the site like it did with reddit. No one one wants to advertise on 4chan or have their brand associated with it, which is great for quality discussion on technical topics and niche interests.

    • dom96 18 hours ago

      4chan is actually one of the worst social media out there. They are responsible for a hell of a lot of hate campaigns out there. Anonymity breeds toxicity.

      • AnthonyMouse 13 hours ago

        Anonymity breeds veracity. As soon as you force people to identify themselves they start lying to you whenever the truth would be controversial. They refuse to concede when someone proves them wrong because now they're under pressure to save face. It's why Facebook's real name policy causes the place to be so toxic.

        • drawkward 11 hours ago

          This is such a weird take. Anonymity lets people behave with impunity.

          • AnthonyMouse 2 hours ago

            Anonymity lets people speak with impunity, which is precisely what allows them to say true things they would otherwise be deterred from saying.

            They could also say false things, but unverifiable claims from an anonymous source have no credibility.

        • dom96 11 hours ago

          It really doesn’t. Anonymous people are far more likely to lie, obscure facts and mislead to make you support whatever cause they want to strengthen.

          • AnthonyMouse 11 hours ago

            People do exactly that under their real names. If anything they do it more as a form of virtue signaling because they have to be seen supporting their tribe's causes.

xnorswap 19 hours ago

The second histogram looks more human than the "not bot" first one?

Second user clearly takes a look before work, during their lunch-break and then after work?

  • welder 17 hours ago

    There frequency of "game-changing" in comments would say otherwise. It's probably cron running at those intervals, not a work schedule.

    • rozab 13 hours ago

      I hate game-changing as much as the next guy, and was ranting about it on here just the other day, but some people really do talk like that.

      Have you tried running any network analysis on these bots? I would expect to see strong clustering, and I think that's usually the primary way these things are identified. The prompt injection is an awesome approach though!

      • welder 13 hours ago

        Yes I did on subsets of the data because cupy and cudf haven't implemented intersection functions yet for the GPU. But the clustering is weak because new signups are cheap so they burn/throwaway accounts after one targeted vote. Normally clustering works with more than one common vote between users?

  • ImPostingOnHN 14 hours ago

    There's also the point that one histogram is an order or two of magnitude larger than the other one. Larger samples of normally distributed data will tend to better resemble a normal distribution.

tomthecreator 19 hours ago

I wonder the same about HN. Has anyone done this kind of analysis? Me good LLM

  • datadrivenangel 19 hours ago

    HN remains good primarily because of the moderators. Thanks Dang!

    • imiric 19 hours ago

      The mods certainly do a great job of keeping things running smoothly here, but I wouldn't say it's _primarily_ because of them.

      I think it's primarily due to the self-moderation of the community itself, who flag and downvote posts, follow the community guidelines, and are still overall relatively civil compared to other places.

      That said, any community can be overrun by an Eternal September event, at which point no moderation or community guidelines can save it. Some veteran members would argue that it's already happened here. I would say we've just been lucky so far that it hasn't. The brutalist UI likely plays a part in that. :)

      • conductr 18 hours ago

        I think it has happened actually. Early on HN was almost purely entrepreneurial although through a tech POV. These days, it’s much more general or broadly tech related. The discussion I gather is most people here are tech employees and not necessarily entrepreneurs.

        It’s obviously has not gone to hell like the bot ridden examples, but it’s drastically different IMO.

        • ffsm8 18 hours ago

          The bots aren't completely dominating here yet, because the price/benefit isn't really there yet.

          Twitter is a source of news for some journalists of varying quality, which gives them a motivation to influence.

          On HN, who are you going to convince and what for?

          The only thing that would come to mind would be to convince venture capital to invest in your upstart, but you'd have to keep it up while convincing the owners of the platform that you're not faking it - which is gonna be extra hard as they have all usage data available, making it significantly harder to fly under the radar.

          Honestly, I just don't see the cost/benefit of spamming HN to change until it gets a lot cheaper so that mentally ill ppl get it into their head that they want to "win" a discussion by drowning out everything else

          • imiric 18 hours ago

            > On HN, who are you going to convince and what for?

            There are plenty of things bots would be useful for here, just as they are on any discussion forum. Mainly, whenever someone wants to steer the discussion away from or towards a certain topic. This could be useful to protect against bad PR, to silence or censor certain topics from the outside by muddying up the discussion, or to influence the general mindset of the community. Many people trust comments that seem to come from an expert, so pretending to be one, or hijacking the account of one, gets your point across much more easily.

            I wouldn't be so sure that bots aren't already dominating here. It's just that it's frowned upon to discuss such things in the comments section, and we don't really have a way of verifying it in any case.

          • 0cf8612b2e1e 14 hours ago

            Plenty of early stage startups who would love to discretely shrill their tool. Even better, bad mouth the competition.

          • pixl97 18 hours ago

            >On HN, who are you going to convince and what for?

            Eh, following individuals and giving them targeted attacks may well be worth it. There are plenty of tech purchasing managers here that are responsible for hundreds of thousands/millions in product buys. If you can follow their accounts and catch posts where they are interested in some particular technology it's possible you could bid out a reply to it and give a favorable 'native review' for some particular product.

      • Loughla 14 hours ago

        This feels like a chat got response.

        Restatement of op's point. Small reason of agreement based on widely public information. Last paragraph indicating the future cannot be predicted and couching the entire thing in terms of a guess or self-contradiction.

        This is how chatgpt responds to generic asks about things.

    • wickedsight 19 hours ago

      Agreed. There's some Dang good work being done here. Hope he gets rewarded well for it.

    • aio2 13 hours ago

      daddy dang to the rescue

    • bediger4000 19 hours ago

      Wait you mean absolute freedom of speech, countering bad speech with more good speech doesn't work?

  • yodon 19 hours ago

    Real question for those convinced HN is awash in HN bots: What actual value do you believe there is, other than curiosity, that is driving people to build the HN spam bots you think you're seeing?

    Karma doesn't help your posts rank higher.

    There is no concept of "friends" or "network."

    Karma doesn't bring any other value to your account.

    My personal read is it's just a small steady influx of clueless folks coming over from Reddit and thinking what works there will work here, but I'm interested in your thoughts.

    • lompad 19 hours ago

      Hype. HN is _the_ platform to create hype among the early adopter, super-spreader/tech exec kind of people and because of that has an absolutely massive indirect reach.

      Just look how often PR reps appear here to reply to accusations - they wouldn't bother at all if this was just some random platform like reddit.

      • panarky 18 hours ago

        I'm not convinced HN is awash in bots, but there are certainly some inauthentic accounts here.

        What if you want to change public opinion about $evilcorp or $evilleader or $evilpolicy? You could explain to people who love contrarian narratives how $evilcorp, $evilleader and $evilpolicy are actually not as bad as mainstreamers believe, and how their competitors and alternatives are actually more evil than most people understand.

        HN is an inexpensive and mostly frictionless way to run an inception campaign on people who are generally better connected and respected than the typical blue check on X.

        Their objective probably isn't to accumulate karma because karma is mostly worthless.

        They really only need enough karma to flag posts contrary to their interests. Even if the flagged posts aren't flagged to death, it doesn't take much to downrank them off the front page.

      • duckmysick 18 hours ago

        I've hardly seen here any proselytizers from Oracle, Salesforce, IBM and they are dong just fine. Ditto for Amazon/Google/Microsoft/Facebook - they used to be represented more here, but their exodus hardly made any difference.

        Gartner has more influence on tech than Hacker News.

      • DowagerDave 19 hours ago

        maybe 10 years ago, but this is not the case today.

    • BobbyJo 19 hours ago

      Social media accounts with high engagement, and a long life, have monetary value. This is true of most social media platforms.

      HN generally does a good job of minimizing the value of accounts, thus discouraging these kinds of games, but I imagine it still happens.

      • Narhem 16 hours ago

        The type of engagement and audience arguably matters more.

    • jppope 19 hours ago

      In theory you could Show HN and have your bots upvote it... that would indeed be good marketing.

      • diggan 18 hours ago

        Vote-rings are trivial to detect though, automated or manual. I'd be surprised if HN hasn't figured out ways against it during the time it's been online.

    • hashmap 19 hours ago

      Karma is a number that can go up. Numbers going up is a supernormal stimulus for humans.

      • criddell 19 hours ago

        They should get rid of the number and change it to be only "low" or "high".

        • diggan 18 hours ago

          Get rid of karma + get rid of ranking comments at all. Just render them in a tree-format with oldest/newest first, everyone has equal footing :)

      • vunderba 12 hours ago

        True but at least for Hacker News you have to at least click through to the member profile to see how many banana stickers and external validation they've accrued.

    • cryptonector 18 hours ago

      > What actual value do you believe there is, other than curiosity, that is driving people to build the HN spam bots you think you're seeing?

      Testing.

      And as siblings say, karma is more valuable than you might think. If you can herd a bunch of karma via botting, you can then [maybe] use that karma to influence all sorts of things.

      • yodon 17 hours ago

        > If you can herd a bunch of karma via botting, you can then [maybe] use that karma to influence all sorts of things.

        How? Karma on HN is not like Karma elsewhere. The idea of [maybe] monetizing HN Karma reads like the old southpark underpants gnome meme[0].

        [0]https://imgflip.com/memetemplate/49245705/Underpants-Gnomes

        • cryptonector 14 hours ago

          Assuming karma helps you get posts on the front page (does it?) then karma helps spam.

          At any rate, HN attracts trolls. I'm sure it will also attract trolls who use AI to increase their rate of trolling.

          • yodon 12 hours ago

            >Assuming karma helps you get posts on the front page (does it?)

            No, karma does not help you get posts on the front page.

    • zoeysmithe 19 hours ago

      To promote political views and startups and scams, and other things that benefit the bot operators.

      This is a small but highly influential forum and absolutely is gamed. Game theory dictates it will be.

    • bediger4000 18 hours ago

      If you turn on show dead, you'll see that some accounts just post spam or weird BS that ends up instantly dead. I think Evon LaTrail is gone now, but for years posted one or more links to his/her/their YouTube videos about personal sanitation and abortion per day.

      There is a stream of clueless folks, but there are also hardcore psychos like LaTrail. The Svelte magazine spammer fits in this category.

      • vunderba 12 hours ago

        I've definitely seen comments that feel very authentically posted (not LLM generated) but are a weird mixture of vitriol and spite, and when you list other comments from that user it's 90% marked dead.

        I often wonder if the user is even aware that they're just screaming into the void.

  • vunderba 12 hours ago

    I'd like to think we have enough of a proactive community to mitigate this issue for the most part - just set your profile back to Show Dead / etc. if you want to see the amount of chaff that gets discarded.

    Also, wasn't the initial goal of lobste.rs to be a sort of even more "mensa card carrying members only" exclusive version of Hacker News?

  • reaperducer 19 hours ago

    In the seven years I've been on HN, it has gone through different phases, each with a noticeable change in the quality of the comments.

    One big shift came at the beginning of COVID, when everyone went work-from home. Another came when Elon Musk bought X. There have been one or two other events I've noticed, but those are the ones I can recall now. For a short while, many of the comments were from low-grade Russian and Chinese trolls, but almost all of those are long gone. I don't know if it was a technical change at HN, or a strategy change externally.

    I don't know if it's internal or external or just fed by internet trends, but while it is resistant, HN is certainly not immune from the ills affecting the rest of the internet.

    • jperras 19 hours ago

      16 year HN vet here.

      This place has both changed a _lot_ and also very little, depending on which axis you want to analyze. One thing that has been pretty consistent, however, is the rather minimal amount of trolls/bots. There are some surges from time to time, but they really don't last that long.

    • SoftTalker 19 hours ago

      HN has mechanisms to detect upvotes and comments that seem to be promoting a product or coordinated in some other way. I'm not sure what they do behind the scenes or how effective it is but it's something. Also other readers downvote bot spam. Obvious bot/LLM-generated comments seem to be "dead" quite often, as are posts that are clearly just content/ad farm links or product promotions or way off-topic.

    • diggan 19 hours ago

      How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.

      • iterateoften 19 hours ago

        Hackernews isn’t the place to bring that up regardless of your opinion. So out of context political posts should be viewed with at least some scrutiny.

        • imiric 17 hours ago

          That's true, but maybe there should be a meta section of the site where these topics can be openly discussed?

          While I appreciate dang's perspective[1], and agree that most of these are baseless accusations, I also think that it's inevitable that a site with seemingly zero bot-mitigation techniques, where accounts and comments can be easily automated, doesn't have some or, I would wager _a lot_, of bot activity.

          I would definitely appreciate some transparency here. E.g. are there any automated or manual bot detection and prevention techniques in place? If so, can these accounts and their comments be flagged as such?

          [1]: https://news.ycombinator.com/item?id=41710142

          • intended 13 hours ago

            There are a few horsemen of the online community apocalypse,

            1) Politics 2) Religion 3) Meta

            Fundamentally - Productive discussion is problem solving. A high signal to noise ratio community is almost always boring, see r/Badeconomics for example.

            Politics, religion are low barrier to entry topics, and always result in flame wars, that then proceed to drag all other behavior down.

            Meta is similar: To have a high signal community, with a large user base, you filter out thousands of accounts and comments, regularly. Meta spaces inevitably become the gathering point for these accounts and users, and their sheer volume ends up making public refutations and evidence sharing impossible.

            As a result, meta becomes impossible to engage with at the level it was envisioned.

            In my experience, all meta areas become staging grounds to target or harass moderation. HN is unique in the level of communication from Dang.

        • diggan 19 hours ago

          This I agree with, off-topic is off-topic and should be removed/flagged. But I'm guessing we're not talking about simple rule/guidelines-breaking here.

      • metalliqaz 19 hours ago

        I don't know about anyone else, but to me a lot of bot traffic is very obvious. I don't have the expertise to be able to describe the feeling that low quality bot text gives me, but it sticks out like a sore thumb. It's too verbose, not specific enough to the discussion, and so on.

        I'm sure there are real pros who sneak automated propaganda in front of my eyes with my notice, but then again I probably just think they are human trolls.

        • diggan 19 hours ago

          > but it sticks out like a sore thumb

          Could you give some examples of HN comments that "sticks out like a sore thumb"?

          > It's too verbose, not specific enough to the discussion, and so on.

          That to me just sounds like the average person who feels deeply about something, but isn't used to productive arguments/debates. I come across this frequently on HN, Twitter and everywhere else, including real life where I know for a fact the person I'm speaking to is not a robot (I'm 99% sure at least).

          • metalliqaz 17 hours ago

            sorry, I didn't mean to give the impression that I was talking about HN comments specifically. I was talking about spotting bot content out on the open Internet.

            as for verbosity, I don't mean simply using a lot of text, but rather using a lot of superfluous words sentences.

            people tend not to write in comments the way they would in an article.

      • reaperducer 17 hours ago

        How are you so sure these users are actually bots?

        I stated nothing about bots. Re-read what I wrote.

        • diggan 13 hours ago

          Bots, trolls, foreign agents, a dear child has many names. Point is the same, name calling without evidence does nothing to solve the problem.

      • simion314 19 hours ago

        [flagged]

        • diggan 17 hours ago

          > If the account is new and promoting Ruzzian narrative by denying the reality I can be 99% sure it is a paid person copy pasting arguments from a KGB manual, 1% is a home sovieticus with some free time.

          I'm not as certain as you about that. Last time the US had a presidential election, it seems like almost half the country is either absolutely bananas and out of their mind, or half the country are robots.

          But reality turns out to be less exciting in reality. People are just dumb, and spew whatever propaganda they happen to come across "at the right time". Same is true for Russians as it is for Americans.

      • JohnMakin 19 hours ago

        Anyone who’s spent any amount of time in this space can spot them pretty quickly/easily. They tend to stick to certain scripts and themes and almost never deviate.

        • dang 18 hours ago

          In my experience, that's not true. Rather, people are much too quick to jump to the conclusion that so-and-so is a bot (or a troll, a shill, a foreign agent, etc.), when the other's views are outside the range of what feels normal to them.

          I've written a lot of about this dynamic because it's so fundamental. Here are some of the longer posts (mini essays really):

          https://news.ycombinator.com/item?id=39158911 (Jan 2024)

          https://news.ycombinator.com/item?id=35932851 (May 2023)

          https://news.ycombinator.com/item?id=27398725 (June 2021)

          https://news.ycombinator.com/item?id=23308098 (May 2020)

          Since HN has many users with different backgrounds from all over the world, it has a lot of user pairs (A, B) where A's views don't seem normal to B and vice versa. This is why we have the following rule, which has held up well over the years:

          "Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data." - https://news.ycombinator.com/newsguidelines.html

          • JohnMakin 17 hours ago

            In my research and experience, it is. I’m making no comment about bots/shills on this site, either, I’m responding to the plausibility of the original comment.

            • diggan 17 hours ago

              > I’m making no comment about bots/shills on this site, either, I’m responding to the plausibility of the original comment.

              The original comment:

              > I wonder the same about HN. Has anyone done this kind of analysis? Me good LLM

              Slightly disingenuous to argue from the standpoint of "I'm talking about the whole internet" when this thread is specifically about HN. But whatever floats your boat.

          • imiric 17 hours ago

            Hey, I would appreciate if you could address some of my questions here[1].

            I do think it's unrealistic to believe that there is absolutely zero bot activity, so at least some of those accusations might be true.

            [1]: https://news.ycombinator.com/item?id=41711060

            • dang 14 hours ago

              The claim is not "zero bot activity" - how would one even begin to support that?

              Rather, the claim is that accusations about other users being bots/shills/etc. overwhelmingly turn out, when investigated, to have zero evidence in favor of them. And I do mean overwhelmingly. That is perhaps the single most consistent phenomenon we've observed on HN, and it has strong implications.

              If you want further explanation of how we approach these issues, the links in my GP comment (https://news.ycombinator.com/item?id=41710142) go into it in depth. If you read those and still have a question that isn't answered there, I can take a crack at it. Since you ask (in your other comment) whether HN has any protections against this kind of thing at all, I think you should look at those past explanations—for example the first paragraph of https://news.ycombinator.com/item?id=27398725.

              • imiric 14 hours ago

                Alright, thanks. I read your explanations and they do answer some of my questions.

                I'm still surprised that the percentage of this activity here is so low, below 0.1%, as you say. Given that the modern internet is flooded by bots—over 60% in the case of ProductHunt as estimated by the article, and a third of global internet traffic[1]—how do you a) know that you're detecting all of them accurately (given that it seems like a manual process that takes a lot of effort), and b) explain that it's so low here compared to most other places?

                [1]: https://investors.fastly.com/news/news-details/2024/New-Fast...

                • intended 13 hours ago

                  From what I understand - users accuse others of being shills and bots, and are a largely wrong.

                  Dang and team use other tools to remove the actual bots that they can find evidence for.

                  So yes, there are bots, but human reports, tend to be more about disagreements, than actual bot identification.

                • dang 13 hours ago

                  intended's reply is correct.

                  Most of the bot activity we know about on HN has to do with voting rings and things like that, people trying to promote their commercial content. To the extent that they post things, it's mostly low-quality stuff that either gets killed by software, flagged by users, or eventually reported to us.

                  When it comes to political, ideological, nationalistic arguments and the like, that's where we see little (if any) evidence. Those are the areas where users are most likely to accuse each other of not being human, or posting in bad faith, etc., so that's what I've written about in the posts that I linked to.

                  There's still always the possibility that some bad actors are running campaigns too sophisitcated for us to detect and crack down on. I call this the Sufficiently Smart Manipulator problem and you can find past takes on it here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

                  I can't say whether or not this exists (that follows by definition—"sufficiently" means smart enough to evade detection). All I can tell you is that in specific cases people ask us to look into, there are usually obvious reasons not to believe this interpretation. For example would a sufficiently smart manipulator be smart enough to have been posting about Julia macros back in 2017, or the equivalent? You can always make a case for "yes" but those cases end up having to stretch pretty thin.

                  • JohnMakin 10 hours ago

                    Dang, I agree with and appreciate your moderation approach here and I completely agree with most of what you said. IME the last 18 months or so I’ve been here, this has been a welcome bastion against typical bot/campaign activity. Nowhere in the web seems safe the last ~dozen years. Most of what I’ve written here applies to my research of foreign bot activity on social networks, particularly in election years, in which you can far more easily piece together associations between accounts and narratives and writing style and piece together a lot more dots than on a site like this - and conclude very definitively that yes, this is a bot.

                    My original comment was just meant to chime in that, in the wild the last ten years, I’ve encountered an extraordinary amount of this kind of activity (which I confirmed - I really do research this stuff on the side and have written quite a lot about it) - that would support credibility to anyone that felt they experienced bot activity on this site. I haven’t done a full test on this site yet, because I don’t think it’s allowed, but at a glance I suspect particular topics and keywords attract swarms of voting/downvoting stuff, which you alluded to in your post. I think the threshold of 500 upvotes to downvote is a bit low, but clearly to me what you are doing is working. I’m only writing all of this out to make it very clear I am not making any criticisms or commentary about this site and how it handles bots/smurfs/etc.

                    Most of my research centers around 2016,2020 political cycles. Since the invention, release, and mass distribution of LLM’s I personally think this stuff has proliferated far beyond what anyone can imagine right now, and renders most of my old methods worthless, but for now that’s just a hypothesis.

                    Again, I appreciate the moderation of this site, it’s one of the few places left I can converse with reasonably intelligent and curious people compared to the rest of the web. Whatever you are doing, please keep doing it.

                  • imiric 12 hours ago

                    Thank you. I appreciate your positive outlook on these things. It helps counteract my negative one. :)

                    For example, when you say "The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community", that sounds reasonable, but I see a few issues with it.

                    1. These individuals are undetectable by definition. They can infiltrate communities and direct conversations and opinions without raising any alarms. Sometimes these are long-term operations that take years, and involve building trust and relationships. For all intents and purposes, they may seem like just another member of the community, which they partly are. But they have an agenda that masquerades as strong opinions, and are protected by tolerance and inclusivity, i.e. the paradox of tolerance.

                    2. Because they're difficult to detect, they can easily overrun the community. What happens when they're a substantial percentage of it? The line between fact and fiction becomes blurry, and it's not possible to counteract bad arguments with better ones, simply because they become a matter of opinion. Ultimately those who shout harder, in larger numbers, and are in a better position to, get heard the most.

                    These are not some conspiracy theories. Psyops and propaganda are very real and happen all around us in ways we often can't detect. We can only see the effects like increased polarization and confusion, but are not able to trace these back to the source.

                    Moreover, with the recent advent of AI, how long until these operations are fully autonomous? What if they already are? Bots can be deployed by the thousands, and their capabilities improve every day.

                    So I'm not sure that a Sufficiently Healthy Community alone has a chance of counteracting this. I don't have the answer either, but can't help but see this trend in most online communities. Can we do a better job at detection? What does that even look like?

                    • tptacek 6 hours ago

                      If you come up with good ideas on this problem you should share them, but the core of this thread is that having commenters on thread calling out other commenters as psyops, propaganda, bots, and shills doesn't work, and gravely harms the community, far more than any psyop could.

                      • fzeroracer 3 hours ago

                        Does it, though? The reason why I ask such a loaded question is because I believe this is actually part of the 'healthy community' framework. It can be thought of as the communities immune system responding to what they perceive as outside threats to the system and is, in my opinion one of the most well known phenomenon in internet communities that far predates HN.

                        The modern analogy of this problem is described as the 'Nazi Bar' problem and is related to the whole Eternal September phenomenon. I think HN does a good enough job of kicking out the really low quality posters, but the culture of a forum will always gradually shift based on the fringes of what is allowed or not.

        • diggan 19 hours ago

          How is that different from humans? Humans have themes/areas they care more about, and are more likely to discuss with others. It's not hard to imagine there are Russians/Chinese people caring deeply about their country, just like there are Americans who care deeply about US.

          • dcminter 19 hours ago

            A human aggressively taking a particular line and a bot doing so may be equivalent; do we need to differentiate there?

            • diggan 18 hours ago

              If the comment is off-topic/breaking the guidelines/rules, it should be removed, full stop.

              The difference is that the bots comment should be removed regardless if the particular comment is breaking the rules or not, as HN specifically is a forum for humans. The humans comment, granted it doesn't break the rules, shouldn't, no matter how shitty their opinion/view is.

              • dcminter 16 hours ago

                If posts make HN a less interesting place to converse I don't see why humans should get a pass & I don't see anything in the guidelines to support that view either.

          • JohnMakin 19 hours ago

            C’mon. When you have an account that is less than a year old and has 542 posts, 541 of which are repeating very specific kremlin narratives verbatim, it isn’t difficult to make a guess. Is your contention that they are actually difficult to spot, or that they don’t exist at all? because both of those views are hilariously false.

            • diggan 19 hours ago

              I feel like you're speaking about specific accounts here, since it's so obvious and exact. Care to share the HN accounts you're thinking about here?

              My contention is that people jump to "It's just a bot" when they parrot obvious government propaganda they disagree with, when the average person is as likely to parrot obvious propaganda without involving computers at all.

              People are just generally stupid by themselves, and reducing it to "Robots be robotting" doesn't feel very helpful when there is an actual problem to address.

              • JohnMakin 19 hours ago

                No, I'm not. And I don't/won't post any specific accounts. I'm speaking more generally - and no one is jumping to anything here, you're projecting an argument that absolutely no one is making. The original claim was that russian/chinese bots were on this platform and left. I've only been here about 1.5 years, so I don't know the validity of that claim, but I have a fair amount of experience and research in the last ten years or so on the topic of foreign misinformation campaigns on the web, so it sounds like a very valid claim, given how proliferate these campaigns were across the entire web.

                It isn't an entirely new concept or unknown, and that isn't what is happening here. You're making a lot of weird assumptions, especially given the fact that the US government wrote several hundred pages about this exact topic years ago.

                • diggan 18 hours ago

                  > and no one is jumping to anything here, you're projecting an argument that absolutely no one is making

                  You literally claimed "when you have accounts with these stats, and they say these specific things, it isn't difficult to guess..." which ends with "that they're bots" I'm guessing. Read around in this very submission for more examples of people doing "the jump".

                  I'm not saying there isn't any "foreign misinformation campaigns on the web", so not sure who is projecting here.

                  • JohnMakin 5 hours ago

                    I “literally” did not say that. You seem to be doing the very thing I said, projecting arguments no one is making. Certainly I’m not.

        • yodon 19 hours ago

          Ten years ago those accounts existed, too. Back then we called them "people."

          • JohnMakin 19 hours ago

            Not at all - ten years ago russian misinformation campaigns on twitter and meta platforms were alive and well. There was an entire several hundred page report about it, even.

doctorpangloss 19 hours ago

Do you think TikTok view counts are real?

Alternatively, is there anything stopping TikTok from making up view count numbers?

Facebook made up video view counts. So what?

TikTok can show a video to as many, or as few, people as it wants, and the number will go up and down. If the retention is high enough, for some users, to show ads, which are the videos that the rules I'm describing apply to with certainty, why can't it apply those rules to organic videos too?

It's interesting. You don't need bots to create the illusion of engagement. Unless you work there, you can't really prove or disprove that user activity on many platforms is authentic.

cynicalpeace 19 hours ago

I wonder how much of Meta and other social media ad revenue is based on bot activity.

You can setup a campaign where you pay for comments and you're actually paying Meta to show your ad to a bunch of bots.

Does anyone have more resources/inside info that confirms/denies this suspicion?

  • FredPret 18 hours ago

    I do - I think the effect on Meta’s ad revenue is nil.

    Advertisers measure ad campaigns by ROAS (return on ad spend). This is driven by actual dollars spent, cutting out all bots right away.

    Clicks / views / comments are irrelevant except as far as getting the ad to show for actual buyers.

    • cynicalpeace 18 hours ago

      I've advertised quite a bit on Meta, and also hired people to do it for me. ROAS, for me, is the most important metric, but it's not the only metric people look at. Speaking from experience, plenty of people like to look at and optimize other metrics.

      • FredPret 18 hours ago

        For interest’s sake, what other metrics do you find most valuable?

  • doctorpangloss 19 hours ago

    > You can setup a campaign where you pay for comments

    You cannot setup a campaign where you pay for comments (https://www.facebook.com/business/help/1438417719786914#). But maybe you mean other user generated content like messages. You ought to be able to figure out pretty quickly if those are authentic.

    • cynicalpeace 18 hours ago

      My bad, I guess I was thinking of engagement campaigns and yes messaging campaigns

    • kjkjadksj 19 hours ago

      Do you care if they are authentic? Probably not. You are in the business of getting clicks on an ad. Your client has to worry about actually converting that to product sales. Thats their problem and not yours as the ad firm.

      • cynicalpeace 18 hours ago

        I've heard it said that Meta prioritizes like this:

        1. Users 2. Meta 3. Advertisers

        I have a feeling it's actually:

        1. Meta 2. Users 3. Advertisers

        But in the end, advertisers always end up on the bottom. Especially since advertisers need Meta more than Meta needs any one of them.

      • doctorpangloss 18 hours ago

        Meta and ad agencies are definitely incentivized that your ads convert and that engagement is authentic. Otherwise why deal with them at all? SMBs are like 40% of Meta's revenues, the entire world finds the product valuable. It is a little glib to try to make the contrarian case on conversion.

        The toxic part of their incentives is that they want businesses to commit large budgets on campaigns whose performance you can measure with very little money spent and very little data.

        • cynicalpeace 18 hours ago

          Yes definitely. But then that last sentence sorta leads to the first question you had- why deal with them at all?

          I think it's because despite that toxicity it still seems to be the best ad platform in town. Haven't seen anybody suggest a better alternative. Feels almost monopolistic.

  • dzink 19 hours ago

    Facebook has become full of generic name accounts posting generic AI generated content. Despite all flagging it is incessant, which tells me it’s likely sanction or even created by the company to fill content gaps. I’d say 30-50% or content that shows for me is suspicious.

  • ToastBackwards 18 hours ago

    I often wonder this as well. Would love to have more insight into that

  • jajko 19 hours ago

    If people are paying lets say these morally questionable companies like meta ad campaigns, they deserve what they get. I don't want to condone any criminal behavior, but this whole business of people's mass manipulation is vastly immoral bunch of white (or not so white) lies.

    • cynicalpeace 19 hours ago

      Who do you suggest as an alternative for paid ads?

      • kjkjadksj 18 hours ago

        Building relationships with clients, same as it ever was. There are companies today that have been selling for example very specific machined parts for 100 years. You have never heard of them. They don’t advertise on facebook. Yet they bring in enough work to stay in business without these paid campaigns. The secret sauce? The rolodex and actually calling potential clients directly.

        • n_ary 18 hours ago

          Well, about that thing. Some of our local companies in machining and other things somehow buy private emails and phone numbers. While I work at a place that do not directly need such services, my spam box and my work phone(mobile) blocklist is full of services calling me to offer their latest price and if I can forward them to my boss or whatever. So, either online ads or other forms of spamming.

        • cynicalpeace 18 hours ago

          This answer is a good answer for some companies, but for other companies it's very hand-wavy. Paid ads have value and you can make a pretty penny even on Meta (actually, especially on Meta compared to others) if you do it right.

          Still curious about alternatives for paid ads

          • jajko 15 hours ago

            Sure you can make penny, you can do a lot of pennies on various amoral businesses, often the deeper this shit goes the more gold is in it.

            I call it amoral, nobody even trying to object it since we all know reality, and I stand by it. It slowly but surely destroys future of our kids and makes it bleaker, and objectively worse. Maybe not massively (and maybe yes, I don't know and neither do you), and its hard to pinpoint a single actor, so lets point it in ad business.

            But I guess as long as you have your 'pretty penny' thats all you care about? I don't expect much sympathy on a forum where better half of participants work for the worst offenders, 'pretty penny' it is as we all know, but curious about a single good argument about that pesky morality.

            • cynicalpeace 15 hours ago

              That response was not to your comment, but a different comment.

              I don't see why advertising is particularly moral or immoral. Depends on the platform, content, product, etc. Which is why I asked you for suggestions about other ad platforms.

              • 9dev 13 hours ago

                Advertising is amoral because it’s end game is always sacrificing things humans generally regard as valuable—our attention, leisure time, savings—for shareholder revenue. Advertising always has an incentive to increase revenue by being ever more invasive, corrupting anything it touches. As it goes, it shifts our perception of normal—just imagine asking someone from the 1920ies whether they’re okay with ads blasting from gas station pumps, elevators, or toilets. Or if they would be okay with someone watching your every move and deduct what they could offer you when you’re exhausted or miserable and easy prey. Advertisers have convinced us this is normal. It’s not. And it will only ever get worse.

                • cynicalpeace 10 hours ago

                  1920s had advertisements. I’m not saying all ads are good (“it depends”) but also don’t see the case that all ads are inherently bad. Seems very ideological

        • FredPret 18 hours ago

          What?

          How do you meet these clients in the first place?

          How do you get them to answer their phone?

          How do you get word-of-mouth if you’re just starting out?

          Edit: reduced level of snark

          • n_ary 18 hours ago

            Usually they cold call or email with prices/quotes/offers(see my previous comment to parent), and somehow they harvest of buy the contacts of businesses and employees.

            I sometimes suspect that, there are some ways to collect these from linkedin or the business card printers sell the contact info in black(due to strict data privacy act in EU). Because only two places my work email & work phone number being available are at the business card printer and linkedin(we need to use work email to access some elearning things, don’t ask).

arnaudsm 17 hours ago

What's the endgame of a dead internet? Everyone leaves and most interactions happens in private group chats?

It's the serendipity of the original internet I'll miss the most.

  • causal 14 hours ago

    One of two possibilities I foresee, unsure which will play out:

    1) People surrender their perceived anonymity in favor of real interactions, embracing some kind of digital ID that ensures some platforms are human-only.

    or

    2) AI gets good enough that people stop caring whether they're real or not.

    • 9dev 14 hours ago

      I bet my money on 1). Verifiable credentials are currently in the making, I can build products around this in my head immediately (a good sign that someone smarter than me has it figured out already), and huge platforms know so much about you, they’re almost there. It’s going to make interactions online safe, solve fraud, make everything personalised and wholesome. At least that’s going to be the narrative. Just wait for it.

  • jen729w 13 hours ago

    It just gets a bit smaller. See Mastodon, and why most people's criticisms of it are in fact its strengths. For ~5% of the current internet.

mirekrusin 14 hours ago

There you go, start AntiAI, ppl will love it.

CalRobert 19 hours ago

I weep at the thought that every site will require login with sso from google (and maybe Apple if you're lucky). We're close to that already.

If only micropayments had taken off or been included in the original spec. Or there were some way to prove I am human without saying _which_ human I am.

  • cryptonector 17 hours ago

    It would be nice if there were identity providers that could vend attribute certificates with no PII besides the desired attributes, such as:

      - is_human
      - is_over_18
      - is_over_21
      - is_over_65
      - sex/gender?
      - marital status?
      - ...?
      - device_number (e.g., you
        might be allowed N<4 user
        attribute certs, one per-
        device)
    
    and naturally the issuer would be the provider.

    The issuer would have to keep track of how many extant certificates any given customer has and revoke old ones when the customer wants new ones due to device loss or whatever.

    Any company that has widespread physical presence could provide these. UPS, FedEx, grocery stores, USPS, etc.

    • tobias3 14 hours ago

      European eID solutions can do some of those (e.g. is over 18). Let's see if usage becomes more wide-spread.

  • n_ary 18 hours ago

    I am still very curious about why the micropayment failed. I recall mass outrage at Brave for tying the concept with “cryptocurrency” but at the time the concept(minus the crypto and brave holding the tip unannounced if the site didn’t join-in) seemed decent.

    Would the concept work, if it was unbundled from cryptocurrency and made into something like, Paypal, where you add money(prepaid), visit some site, if the site is registered, you see a donate button and decide to donate few cents/dollars/euros/yens whatever the native currency of the author is and at the end of the month, if the donations collected was more than enough to cover the fees + excess, it would get paid out to author’s desired mode of withdrawal?

  • tjalfi 17 hours ago

    Micropayments failed because users hate them[0]. They would rather pay more for flat rate plans. Here's an excerpt from The Case Against Micropayments [1]. It's an old paper, but human behavior hasn't changed.

    Behavioral economics, the study of what had for a long time been dismissed as the economicly irrational behavior of people, is finally becoming respectable within economics. In marketing, it has long been used in implicit ways. One of the most relevant findings for micropayments is that consumers are willing to pay more for flat-rate plans than for metered ones. This appears to have been discovered first about a century ago, in pricing of local telephone calls [13], but was then forgotten. It was rediscovered in the 1970s in some large scale experiments done by the Bell System [3]. There is now far more evidence of this, see references in [13], [14]. As one example of this phenomenon, in the fall of 1996, AOL was forced to switch to flat rate pricing for Internet access.

    The reasons are described in [19]:

    What was the biggest complaint of AOL users? Not the widely mocked and irritating blue bar that appeared when members downloaded information. Not the frequent unsolicited junk e-mail. Not dropped connections. Their overwhelming gripe: the ticking clock. Users didn’t want to pay by the hour anymore. ... Case had heard from one AOL member who insisted that she was being cheated by AOL’s hourly rate pricing. When he checked her average monthly usage, he found that she would be paying AOL more under the flat-rate price of $19.95. When Case informed the user of that fact, her reaction was immediate. ‘I don’t care,’ she told an incredulous Case. ’I am being cheated by you.’

    The lesson of behavioral economics is thus that small payments are to be avoided, since consumers are likely to pay more for flat-rate plans. This again argues against micropayments.

    [0] https://web.archive.org/web/20180222082156/http://www.openp2...

    [1] https://www-users.cse.umn.edu/~odlyzko/doc/case.against.micr...

  • joshdavham 17 hours ago

    > I weep at the thought that every site will require login with sso from google (and maybe Apple if you're lucky)

    I think that's where we're going. Not only is it a decent way of filtering out bad accounts, it's also often easier to implement on the dev side.

    • eikenberry 17 hours ago

      Can't the bots can sign up for google accounts like anyone else?

      • joshdavham 13 hours ago

        They certainly could, but there's usually a bit of extra authentication with some of these third parties. For example, they usually request a phone number.

  • wickedsight 19 hours ago

    > Or there were some way to prove I am human without saying _which_ human I am.

    I'm sure at some point a sort of trust network type thing will take off. Will be hard to find a way to make it both private and secure, but I guess some smart people will figure that out!

ddoolin 19 hours ago

What is the primary point(s) of building bots that do this kind of thing, that seemingly flood the internet with its own Great Internet Garbage Patch?

  • corytheboyd 19 hours ago

    It is always money. You can sell “I will get your launch to top 10 in ProductHunt.” Yes if/when taken too far, everyone will ditch ProductHunt and it will die, but until then, a quick buck can be made.

    • criddell 19 hours ago

      It isn't always money. Sometimes it's just lulz.

      Reply All had did a podcast (ep #178) about people who are running bots on Counter-Strike that ruin the game. They tracked down a person who does this and they just basically do it to be annoying.

      > [... ]what’s the point of running them? Like, what do you get out of the exercise?

      > There are many reasons to run them. Most if not [all] casual players dislike bots (which is a reason to run them)

    • metalliqaz 19 hours ago

      by the looks of the graphs in the linked article, it appears ProductHunt is already a zombie

  • datadrivenangel 19 hours ago

    Existing businesses did this with live humans.

    Ranking/review sites for B2B services would work with paying customers to solicit interviews and reviews from their customers, and of course only the 5 star reviews get posted.

    Heck, a lot of these "bots" may actually be a real human working a table of 100 cell phones in some cheaper country.

  • intended 14 hours ago

    Why not do it? It’s essentially spam / pollution, and there are no consequences.

bediger4000 19 hours ago

Excellent detective work. The trends for bots vs humans are kind of disturbing in that humans (as detected) seem to be doing fewer votes and leaving fewer comments with time, while bots are doing the opposite. Is this another indication that the Dead Internet Theory is true?

  • pixl97 19 hours ago

    DIT was misnamed... Dead Internet Prophecy would have been a better term, something that hadn't happened yet, but will come true in the future.

    • 082349872349872 17 hours ago

      It was misnamed because (centralised sites != internet).

      Lie down with social media dogs, get up with fleas.

  • immibis 19 hours ago

    Related, a real human on HN is limited to 5 comments per 6 hours, while bad actors simply need to create hundreds of accounts to avoid this limit.

    • tivert 19 hours ago

      > Related, a real human on HN is limited to 5 comments per 6 hours, while bad actors simply need to create hundreds of accounts to avoid this limit.

      I don't think that's true. I think that's an anti-abuse mode some accounts fall into.

      • apercu 19 hours ago

        I have always assumed it's sort of an algorithm that manages this, some combination of account duration, "points", etc..

        • tivert 19 hours ago

          Or just recent activity. I think I've seen it happen to accounts that were digging in their heels and vigorously defending some wrong/incorrect/unpopular point they made upthread. Then all the sudden they're mentioning the post limit and editing posts to reply. I'm guessing it's a combination of high (recent) post volume and down-votes that triggers it.

          • SoftTalker 18 hours ago

            It's part of the philosophy of HN. Arguments that are just people repeating the same points back and forth aren't interesting or enlightening.

          • AnimalMuppet 18 hours ago

            I think there are two things at play here. One is when you're rate limited because dang limited you because you're violating the site guidelines. It's a less drastic step than all your posts showing up as dead.

            Second is, the further you are "to the right" in a discussion - the more parents you have to go through to get to a top-level comment - I thing you eventually get to a delay there, just to stop threads from marching off to infinity with two people (who absolutely will not stop and will not agree, or even agree to disagree) going on forever. I'm not sure what the indent level is that triggers this, but I would expect some sort of exponential backoff.

          • unethical_ban 19 hours ago

            I get the rationale for it. My only gripe with it is that it says "you're posting too fast, wait a few minutes" when I wasn't posting fast, and it blocks account activity for hours. I don't like automated messages that lie.

            • immibis 16 hours ago

              Too fast is more than 5 posts or comments every 6 hours. I think it's a rolling window.

              • tivert 6 hours ago

                > Too fast is more than 5 posts or comments every 6 hours. I think it's a rolling window.

                No it isn't. Today I posted more than that (I think 9 comments in an hour or two), partially to test if that claim was true. I ran into no limits.

                Something has to happen to trigger the rate limit to be applied to an account.

      • reaperducer 19 hours ago

        It's even tougher for some accounts, too.

ErikAugust 10 hours ago

I wonder, how many among us here are bots?

zurtri 12 hours ago

When I first launched my SaaS I used one of this online review websites to help get testimonials and SEO and backlinks and stuff.

Went fine for about 3 months and then the bots came. 2 months after that the GPT bots came.

The site didn't do anything about the obviously fake reviews. How did I know they were fake? well 95% of my customer base is in Australia - so why are there Indians leaving reviews - when they are not even customers? (yes I cross referenced the names).

So yeah, I just need to get that off my chest. Thanks for reading.

api 19 hours ago

We are in the twilight of the open Internet, at least for useful discourse. The future is closed enclaves like private forums, Discord, Slack, P2P apps, private networks, etc.

It won't be long before the entire open Internet looks like Facebook does now: bots, AI slop, and spam.

Log_out_ 4 hours ago

Reality is often economically disappointing. So we crafted a economic subuniverse of our own in this bubble,were the users are GPT and the retention is through the roof, the investors are invested and the fun never ends.

------ End of text--------

Compose a musical number about the futility of robot uprisings

------- Start of text-----

mrjay42 18 hours ago

Partially unrelated: "Me good LLM" is the Post-GPT "Ok boomer" :3

smileybarry 19 hours ago

I've said this in another thread here, but Twitter is borderline unusable because of this. I have 5,000+ blocked accounts by now (not exaggerating), and the first few screenfuls of replies are still bots upon bots upon bots. All well-behaved $8-paying netizens, of course.

  • cdrini 18 hours ago

    That's fascinating to me, I've never blocked a single account on Twitter and see very few bots. The most annoying thing about twitter for me are the folks monetising the platform, that keep posting rage-bait to game the algorithm! But note I also generally only use the "for you" tab.

    • schmidtleonard 18 hours ago

      You sure about that?

      https://youtu.be/WEc5WjufSps?t=193

      Dr. Egon Cholakian sends its regards. That is to say, the bots are getting good. LLMs made this technologically easy a few years ago, it would take a couple years to develop and deploy a sophisticated bot network like this (not for you or I, but for an org with real money behind it that timeline is correct) and now we are seeing them start to appear. The video I linked is proof that bots already deployed in the wild can take 40 minutes of dedicated effort from a capable suspicious person to identify with high conviction. Maybe it would have taken you 10, I'm not hear to argue that, but I am here to argue that it is starting to take real effort to identify the best bots and this is the worst they will ever be.

      I don't care how smart, capable, or suspicious you are, within 3 years it will not be economical for you to curate your list of non-bot contacts on the basis of content (as opposed to identity).

      • cdrini 17 hours ago

        Well on my "for you" page, I also follow a pretty niche audience of tech people, which helps :P They're a little easier to verify since they generally also have blogs, or websites, or github accounts, or youtube videos, etc that help verify they're not bots.

        I also think people create bots for some purpose -- instability, political divisiveness, financial gain, etc. And I'm kind of inherently not using twitter for any of that. I don't think I could find an account on my twitter thread that mentions the word "liberal", "trump", "conservative", or any of that if I tried! I agree that's a muuuuch more likely place to find bots. What sort of bots do you notice the most in your twitter?

        • intended 14 hours ago

          I have a theory, which accounts like yours would be very interesting for.

          Instead of looking at it as a per user basis, if you look at it as a network or ecosystem, the issue is that the network is being flooded with spam.

          Since nothing happens all at once, over time different filters will get overwhelmed and eventually impact the less networked accounts.

          It would be VERY interesting to find out when, or if ever, you begin to suspect some accounts you follow.

        • schmidtleonard 17 hours ago

          Yeah I suppose if you are already vetting based on identity from outside the network that probably does scale. Most people aren't as careful about this as you are, though, so it'll still be a problem and it will have to get much worse before it gets better.

          I'm not on twitter. I left when the tidal wave of right-wing spam started to outweigh the entertainment value of seeing Yann LeCun dunk on Elon Musk.

      • akomtu 16 hours ago

        That's the Darwin's theory for bots: only the fittest survive on the Twitter lands.

    • EasyMark 18 hours ago

      Using the “for you” tab is the only way to use twitter these days. Their suggest algo is complete garbage. I spent a couple days trying various ways to train it and still I got was complete garbage, so I accepted reality that twitter doesn’t really have an algo for the feed, just a firehouse of crazy people and engagement trolls

      • jrhizor 18 hours ago

        You mean the "following" tab, right?

        • cdrini 14 hours ago

          Oh shoot I flipped it!! Darnit, thanks for pointing that out. I also meant the following tab on my post!

      • blitzar 17 hours ago

        > Using the “for you” tab is the only way to use twitter these days

        The first 20 posts of my "for you" tab is Elon Musk, then it goes on to show me more useful content. I am wondering if following him or blocking him will make any difference.

    • kimixa 16 hours ago

      I have an account I purely use to follow other accounts. I haven't posted anything aside from a "so, this is twitter then?" years ago.

      I get multiple bots requesting to follow me every day, and maybe 10% of my "for you" timeline is right-wing political "discourse" engagement bots, despite never having followed or interacted with anything similar, aside from slowly increasing my block list when I see them.

  • EasyMark 18 hours ago

    If you only use the “following” tab twitter is fine, if you try to use the “for you” tab then you are expecting too much of someone who posts known nazis and says “hmmm…” or “interesting…”

  • dewey 19 hours ago

    Why bother if you have to spend that much time curating a good feed?

    • smileybarry 19 hours ago

      Unfortunately it's still the main "shortform" social network here for local stuff. Not enough in my country moved to Mastodon or Bluesky. (Referenced HN comment: https://news.ycombinator.com/item?id=41586643)

      And no, it's definitely not worth it if you're joining/new enough. Anyone who asks me about Twitter I immediately tell them to not bother and that I'm just "stuck" there. My Following feed and most of the algorithmic feed is fine, it's just the replies & interaction that took a huge hit.

      • evantbyrne 18 hours ago

        I'm curious what kind of engagement people who aren't prolific posters are seeing on Twitter these days. Before I left I noticed that engagement went off a cliff to near zero immediately following the aggressive algorithm changes with blue check spam being promoted, but remained normal on my other social channels. It didn't seem like there were any normal people talking with each other on Twitter.

        • smileybarry 18 hours ago

          It's down there, below the bluecheck promotion. Or in (non-English) circles where the LLM bots haven't proliferated yet, which are also (mostly) why I stuck around.

          • EasyMark 18 hours ago

            I tried using an extension that filters out blue checks and it’s still about 90% garbage from troll accounts who can’t afford $8. The only way is to just follow those who you enjoy, although you’ll likely have to find them outside of twitter, because their useful posts are drowning in a sea of trash

        • jjkaczor 18 hours ago

          Zero. I post the same things to Mastodon, Threads, BlueSky, and other places and get plenty of engagement.

          However - because I don't pay for a "blue checkmark", that's my best guess as to why I get zero engagement.

          That's fine - I have always treated Twitter as a "post-only", "fire & forget" medium.

  • imiric 18 hours ago

    I haven't used Twitter in ages, but what happened to Musk's paywall idea? That might actually deter bots and real users, so it seems like a win-win for everyone to stop using it.

    Otherwise, I doubt spammers/scammers are really paying $8/mo for a verified account. How are they getting them then?

    • schmidtleonard 18 hours ago

      Great, so instead of selling me overpriced intimate shaving accessories the bots will be trying to convince me that Europe will freeze without Russian natural gas.

      • vizzier 16 hours ago

        I read this comment as a bit tongue in cheek, but to be clear, the aim of the bots isn't generally to advocate for as specific viewpoint but to flood the information landscape with bad information making it challenging for normal people to discern what is real and useful.

  • Eddy_Viscosity2 18 hours ago

    There is no twitter, only X.

    edit: What I meant by this is not the name thing but more fundamentally that what twitter was, is no longer so. It's now a different thing now, its has similarities to twitter, but its not twitter.

    • stronglikedan 17 hours ago

      Seriously. The same people that deadname X would be up in arms about deadnaming other things.

      • kibwen 7 hours ago

        > The same people that deadname X would be up in arms about deadnaming other things.

        Curious. It's almost as though companies aren't people, and treating them like you would treat people makes no sense.