  • The problem is that you’ve got a bunch of nutjobs who will turn that phrasing into a white-genocide conversation.

    The second part is that genocide is a subjective term, because the classification of ethnic groups is itself subjective.

    Honestly, this encapsulates the problem I tend to have aligning on goals with other progressives and some liberals. Every time folks try to simplify something as complex as genocide down to a yes-or-no question, they are already invalidating the majority of positions and forcing a conversation of “agree with me or call me wrong.” That isn’t how it works; that isn’t how discussion and debate work. Forcing people into yes/no thinking doesn’t lead to progress; asking people to think critically does.




  • Genocide is a term that is both overused and underused. There are currently about six genocides ongoing. I don’t see the point in trying to call someone out on it, because outside of a very small number of people, no one is actually doing anything for or against any of them.

    If someone asks me whether I’m anti-genocide, I assume they mean something they specifically consider a genocide, and that they are using the question as bait to get me to out myself in some way. They don’t actually expect that I’m personally participating in or countering one.

    “Trans rights” is also a loaded term now, because there are a LOT of individual rights that trans people need to fight for, all in parallel. It’s better to be specific.

    Sure, someone who says they are against trans people is awful, but I find folks set the bar in different places and use that to start an argument. The easiest example: at what age should someone be allowed to transition? That is an intensely challenging question to answer even on a medical level.





  • The point of my second statement is that if you made an AI that stores and retrieves phone numbers, the model could reasonably use phone-number chunks in its random-number generation. A phone number can normally be broken into 3 to 6 chunks of 1 to 5 digits, which are reasonable sizes to tokenize. If you then asked it for a random number, I think it is plausible that it would be as likely, if not more likely, to draw on the phone-number list as on the core 0–9 digit tokens, unless you specifically tried to keep the two separate.
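    A minimal sketch of the idea, assuming a greedy longest-match tokenizer; the vocabulary and the number here are invented for illustration, not pulled from any real model:

    ```python
    # Toy illustration: how a digit string can come out as a few multi-digit
    # tokens (learned from phone-number-like data) instead of eleven single
    # digits. The vocabulary below is made up for the example.
    def chunk_number(number: str, vocab: set[str]) -> list[str]:
        """Greedy longest-match tokenization of a digit string."""
        tokens = []
        i = 0
        while i < len(number):
            # Try the longest chunk first; a single digit always matches.
            for size in range(5, 0, -1):
                piece = number[i:i + size]
                if piece in vocab or size == 1:
                    tokens.append(piece)
                    i += len(piece)
                    break
        return tokens

    # Hypothetical merges a tokenizer might have learned from phone-like data
    vocab = {"0790", "4461", "234"}
    print(chunk_number("07904461234", vocab))  # -> ['0790', '4461', '234']
    ```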

    This is a WhatsApp AI, so I think asking it for Tim’s number is a use case they trained on; it needs to be a phone book. My guess is that list A is a set of public numbers used for training things like what a phone number looks like, and list B is the list of private user numbers. While a random number could be a random string of digits, the LLM could also be too likely to pull a combination that is actually a real number.

    So is this a case where it randomly pulled together 11 digits that magically hit the roughly 1-in-100 chance that a random string of digits shaped like a UK phone number belongs to a user? Or is it a case where it pulled from a public combo list of 4 tokens and randomly re-formed a real number that was both public and private? The latter seems more likely to me. We probably won’t ever get to know.
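    That “roughly 1 in 100” figure is checkable with back-of-envelope arithmetic; the user count below is an assumption for illustration, not a real WhatsApp statistic:

    ```python
    # UK mobile numbers are "07" plus 9 further digits, so ~1e9 possibilities.
    # Assume, purely for illustration, ~30 million UK WhatsApp accounts.
    possible_uk_mobiles = 10 ** 9
    assumed_uk_whatsapp_users = 30_000_000  # assumption, not a real figure

    p_hit = assumed_uk_whatsapp_users / possible_uk_mobiles
    print(f"P(random UK-shaped number is a real user) = {p_hit:.0%}")  # 3%
    ```

    Under those assumptions the odds come out closer to 1 in 30 than 1 in 100, so a pure random collision isn’t impossible, just less likely than re-forming chunks seen in training.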

    If I were making this AI chatbot, I would have it check against the most privacy-critical data I hold before sharing anything as a random number. WhatsApp phone numbers are its users’ IDs. Even if it truly randomly generates one, it should verify whether it is a private number and refuse to output it, as it showed it could do when questioned about where the number came from.
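    A sketch of that guard, assuming Python; `lookup` stands in for whatever directory check the real system would use and is hypothetical, not a WhatsApp API:

    ```python
    import re

    # Crude UK-style pattern: "0" plus ten more digits.
    PHONE_RE = re.compile(r"\b0\d{10}\b")

    def redact_private_numbers(reply: str, lookup) -> str:
        """Withhold any generated number that collides with a real account."""
        def check(match: re.Match) -> str:
            number = match.group(0)
            return "[number withheld]" if lookup(number) else number
        return PHONE_RE.sub(check, reply)

    # Usage with a stand-in directory:
    directory = {"07904461234"}
    print(redact_private_numbers("Try 07904461234.", lambda n: n in directory))
    # -> "Try [number withheld]."
    ```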



  • I get it, but as a community we should try to be better than that.

    AI won’t fail. It is already past the point where failing or being a fad was an option. Even if we wanted to go backwards, the steps taken to get us here have burned the bridges. We won’t get 2014-quality search engines back. We can’t unshitify the internet.


  • That AI is the one you make, or at least host, yourself. No one is going to host an online AI for you that is 100% ethical, because that isn’t profitable and it is very expensive.

    When you villainize AI, you normalize AI use as being bad. The end result is not people stopping their use of AI; it is people being more okay with using less ethical AI. You can see the same dynamic with folks driving SUVs and big trucks: they intentionally pick awful choices because the fatigue of being told they’re wrong for driving a car at all makes them accept that it doesn’t matter.

    It feels dumb, it is dumb, but it is what happens.