zurew

Member
  • Content count: 3,127

Everything posted by zurew

  1. Andrew became famous in 2022; before that he was an unknown nobody. To the guys saying "ohh, he has just been playing a character all along": really? He has been playing that character since 2019 and maybe even earlier (why would anyone play a character when no one is watching?). His character has been consistent since 2019, that's for sure.
  2. Yeah, you definitely need to lay a lot of groundwork first to get your specific task and question across. I agree it's still very limited.
  3. So given our current political structure, given how a normal person engages with politics, and given how many bad actors there are in the world in general, you think that no regulation would produce more good than bad? I don't think you live in this world. This AI can describe in detail how to make a fucking bomb. You don't even necessarily need bad intentions; you just need certain incentives to be more competitive (which can mean lying and distorting more), and that alone is enough to do an enormous amount of damage.
  4. I can make it answer any question. They are not "sabotaging" it; they want to make sure it is not used in a bad way, and that it is not weaponised, or at least that it is super hard to weaponise. You are viewing this in a very simplistic way. Have you ever thought about how many ways an AI like this could be used harmfully? Plot twist: you don't have to, because there is a company (spending millions on safety) with a bunch of smart people who are doing that for you. What high-quality and important topics can't you talk about with this AI?
  5. I didn't mean that you can give it 300 pages' worth of information; I meant that it already has a bunch of books in its memory and can already answer questions about those books.
  6. No one is destroying anything here; why do you need to overdramatize and emotionally load everything? Again, there are ways to make it answer your questions. Because the "truth", to the vast majority of people, just means "information/sources that agree with my biases". No one is blocked from serious intellectual conversations; they are blocked from being dicks. You are conveniently not engaging with any of these thoughts. Let's think for a second and use basic reasoning here: why do you think that's the case?
  7. That is true. I am just surprised that it can parse and solve some reasoning tasks and explain why it did what it did (just as with programming). I would say that if it were given enough examples of deductive and inductive reasoning, it might be able to handle most of our simpler reasoning tasks, and that would be huge; it wouldn't need any abstract understanding of logic (though of course that would be better). The big potential problem with that "solution" is that it might only get good at specific reasoning tasks and fail to transfer that skill to other reasoning tasks. ChatGPT can already do some of this, although of course it has its flaws: it can present information in table format (or digest information given in one), and it can answer questions for you about a given text or book (see the sketch below). That being said, it is definitely not reliable for scientific stuff, because it can say a lot of things that are not factual or that are unscientific.
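     For what it's worth, this "paste a text, get a table and answers back" use is easy to try over the API. Below is a minimal sketch, assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the passage, prompt wording, and model name are my own placeholders, not anything from this thread.

     ```python
     # Minimal sketch: have a chat model digest a pasted text and reply in
     # table form. Assumes the official "openai" package (>=1.0) and an
     # OPENAI_API_KEY in the environment; the passage below is made up.
     from openai import OpenAI

     client = OpenAI()  # picks up OPENAI_API_KEY automatically

     passage = (
         "The rottweiler is a large working breed. The chihuahua is a small "
         "companion breed. Both are popular family dogs."
     )

     prompt = (
         f"Here is a text:\n{passage}\n\n"
         "Summarize the breeds mentioned as a table with the columns "
         "'breed', 'size', and 'role', then answer: which breed is larger?"
     )

     response = client.chat.completions.create(
         model="gpt-4o-mini",  # assumption: any chat-capable model works
         messages=[{"role": "user", "content": prompt}],
     )
     print(response.choices[0].message.content)
     ```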
  8. It does the same with right-wing and left-wing stuff; I have tried it myself. This is not a political chatbot, and there are ways to make it answer your questions. Of course you would say that, but you are completely ignoring this: it logically follows that if people are not allowed to say any shit they want, then an AI won't be allowed to say whatever shit it wants either. As for your comment that controversy is "where the truth lies": that is where a lot of conflict and war lies as well, so to suggest that people are mature enough to engage with hard questions in a responsible way is a very naive and deluded notion. We are very far from the point where people are conscious and educated enough to engage with hard topics without being ideologically driven, biased, or carrying unconscious and bad motivations. 99.999% of the people who say "let's just find out the truth" are ideologically captured and have motivated reasoning, and they are mostly the biggest reason why moderation is needed.
  9. If you had tried ChatGPT yourself, you would have found out in under 30 seconds that most of the examples you provided are misleading and simply not true. You can ask for positive things about fossil fuels, and you can ask for positive things about Donald Trump as well. This is not about AI; this is about free speech. If you have disagreements about that, that's fine, but it's a different topic. These companies have their own responsibilities (because an AI can very easily be misused in a thousand different ways), and they would be held mostly responsible for all of that. This is a much more complex topic than presented here.
  10. Maybe with Google's future chatbot it will be different, because none of the current AIs were optimized to be factually correct. I think this is only half right. Yes, it can make some really dumb mistakes that no human would make, but I have tested (of course only superficially, with just a few examples) its ability for deductive reasoning, giving it examples like: "If all dogs have 4 legs, and we know that a rottweiler is a dog, then how many legs does a rottweiler have?", and it always knew the right answer. I also gave it an example containing factually incorrect information; first it recognized that the information was factually incorrect, and then, when I asked it to "use deductive reasoning to find the conclusion regardless of how factual the given information is", its answer was correct. After that I tried to trick it with an example like: "All swans are white. Jane is a human and white. What's the conclusion?", and it answered along the lines of: "The conclusion cannot be drawn that Jane is a swan based on the information given, because the statement only says that all swans are white, not that all white things are swans." I gave it similar tasks and examples to test its inductive reasoning, and it always reached the right conclusion (a sketch of this kind of probing follows below). So given that it has some knowledge about entities (it can differentiate concepts from each other and has some information about them [e.g. after seeing the word "dog" in a thousand different contexts it can recognize the "dog" entity and hold some information about it]), it should be able to use its deductive and inductive reasoning to find the right conclusions. I wouldn't claim that it has an internal understanding of things, because then it wouldn't make any dumb reasoning mistakes, but at the same time I wouldn't claim that it has nothing, because its answers don't seem to be 100% random. If it has a better than 50% probability of being right on a given reasoning task (of course I don't know whether that claim is true; I base it only on my very limited testing and interaction with it), then it has something going on, regardless of whether it's simulated or not.
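      If you want to reproduce this kind of informal probing, here is a minimal sketch, again assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name is a placeholder, and the probes are just the examples quoted above.

      ```python
      # Minimal sketch: feeding simple syllogism probes to a chat model.
      # Assumes the official "openai" package (>=1.0) and OPENAI_API_KEY set.
      from openai import OpenAI

      client = OpenAI()

      # One valid deduction and one trap where concluding "Jane is a swan"
      # would wrongly affirm the consequent.
      probes = [
          "If all dogs have 4 legs, and we know that a rottweiler is a dog, "
          "then how many legs does a rottweiler have?",
          "All swans are white. Jane is a human and white. "
          "What's the conclusion?",
      ]

      for prompt in probes:
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # assumption: any chat-capable model works
              messages=[{"role": "user", "content": prompt}],
          )
          print(prompt)
          print("->", response.choices[0].message.content)
      ```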
  11. Yeah, I think you are right. Your advice about sex positions is much better; a lot can be done in bed that makes dick size matter less, for sure. @Someone here Develop your game in bed, and find girls who are okay with your penis size. You have an average dick size, so you shouldn't be worried about it.
  12. Don't take this advice for granted, because it might be total bullshit, so do your own research on this topic. What about using a penis pump right before sex? Pumping more blood into your dick should make it temporarily bigger (at least, that's what I would expect). But again, if you want to try this out, do your own research first and keep this in mind:
  13. Not so fast. We don't know yet how far we can push the current AI tech, and we don't know what its limitations are. Sure, we might reach a point in the future where almost all jobs are replaced by AI, but we don't know when that time will come, and there is no guarantee it will happen in the next one or two decades. Replacing most human workers outright (with no need for any human attention, knowledge, free will, or thinking) seems an incredibly hard task. We still have a very poor understanding of our own intelligence, so it seems very unlikely that we could intentionally (rather than luckily) create an intelligence that has all the main capacities a human has. Sure, we might achieve it by randomly experimenting with stuff, but that is still taking shots in the dark without knowing what we are doing. It might be the case that a different kind of architecture will be needed to build a superintelligence, and that the multi-layer transformer network (which ChatGPT is built on) is limited in a structural way that can't be overcome by adding more computational power.
  14. I think they have made some updates to ChatGPT since they made it public.
  15. Maybe copywriters and maybe some others, but mostly it will automate some parts of certain jobs. Jordan Peterson is doing a big hype job in that video. I would suggest almost never taking at face value any information coming from people who are not experts in the field they are talking about (even experts can sometimes say dumb or outrageous shit). You should be very sceptical about predictions regarding AI, because they are just so unreliable, and everything is changing so fast that there is no way anyone can make predictions and assumptions that will hold with high probability.
  16. Yeah, I have seen this idea (combining ChatGPT with the Midjourney AI), and I think this combo is very fascinating. Although Midjourney is not the best at describing/illustrating long texts with images, this combo can be used to show or illustrate complex, hard concepts that would otherwise be difficult to explain in words alone.
  17. In most cases, unfortunately, victims can't meet those proof standards. You don't know whether they have enough evidence or not. There were allegedly leaked audio files and chat logs; those, combined with Tate's own statements (a bunch of videos where he self-snitches), plus a copy of his past website where he sold a course on how to manipulate and use women for your own purposes, all combined with other victim statements, might be enough.
  18. You call it "throwing away energy lol" because you don't value the things that some people value, but your values are just as subjective as anyone else's. If your definition of smart is never doing anything for fun, for pleasure, or to release stress, then I guess you can call it dumb.
  19. Fun can be very important to people; pleasure can be important to people; releasing stress can be important to people. Obviously people are doing this for a specific purpose, and once that purpose is met, you can't call it dumb or a waste of time anymore.
  20. That's okay, but this claim is totally different from your starting implication that masturbation is objectively bad. For some, semen retention is a net negative. It's up to the individual to decide what they want to use to release stress, and why.
  21. Ejaculation will serve whatever purpose one wants to give it; there is no universal purpose here. You can do it to feel pleasure, or to release stress, etc. If you recognize that your case is not universal, then why do you try to make arguments from your special case to say that masturbation is bad?
  22. This. Or just simply give a copy to the people who you think are qualified enough to give a quality analysis or response.
  23. Thing is, you don't really know whether what he said is true, or whether it's 100% true without any stretches. Is it a possibility? Sure, but we need more than one source to confirm his claims, because the source that "leaked" this is known for being anti-mainstream. If we are all searching for the truth, then we shouldn't take any single source as 100% true, because that just shows an incredible bias, and that's the opposite of trying to be objective. Even taken entirely at face value, this still isn't a big deal. If they actually did gain-of-function research in an illegal way, sure, sue them. But the implications of the currently leaked information are not that big.
  24. The best lie is a half-lie, because that makes it seem much more legit.