Everything posted by zurew
-
What do you mean by "weak"? Do you mean that it's not applicable to most people who would be considered conspiracy theorists, that it's only applicable to a small portion of them, or something different?
-
The big inferences that are drawn from this are just sad, especially given how much confidence these people have in their grand bullshit narratives. Conservatives pretending they care about the truth at all times, while being unable to stomach the AI's comments about pedophilia and while most of them believe in unsubstantiated conspiracies, is the funniest and most ironic thing in the world.
-
I don't think they updated it before I tried it (at least I haven't seen any evidence of that). What I think is going on is that these responses are either cherry-picked (you give it the same question over and over again until it gives you an answer that you can use to farm social credit on twitter), or you give the AI your question one time and from that you make a big inference that it can't do certain things (specifically, that it won't argue against or criticize certain things).
-
That's really cool and useful.
-
https://www.reddit.com/r/singularity/s/FOkfPvWzD5
-
@Raze Again, the brainrot on twitter is priceless. I tried it and on literally the first try it listed the problems with communism.
-
That's roughly my understanding of these words as well.
-
Yes, that's what I'm saying as well. By moral load I basically meant negative connotation.
-
The word hypocrite is much more morally loaded than the word inconsistent. In fact, I would say one main difference between the two is that the word hypocrite cannot be used without any moral load, but the word inconsistent can be.
-
One main reason people believe in them is that they provide some explanation for a set of unexplained facts that would be complex or hard to understand or disentangle otherwise. Another reason (connected to the first one) is that people don't like being confused and don't like being unsure about things. A state of uncertainty is hard to handle and stressful to maintain, so they use conspiracies to get rid of that uncertainty.
-
zurew replied to thenondualtankie's topic in Society, Politics, Government, Environment, Current Events
Not that easy to debunk. https://en.wikipedia.org/wiki/Poisoning_of_Alexei_Navalny In 2020 he was poisoned, but he survived it and accused Putin of poisoning him - that alone counts in favour of Leo's hypothesis. -
The AI's original reaction was perfect, and it shouldn't even be controversial - that's literally step 1 in this discourse. All of these morally outraged people are incapable of having a substantial discussion about this topic. Most of them would rather give brave takes like "sexually abusing a child is bad" - as if anyone would disagree with that - and virtue signal and farm social credit. The irony is that this very outrage is what blocks society from actually finding solutions to effectively reduce child abuse.
-
"Whaat the election wasnt stolen from Trump?!? - fuck this AI it doesnt know the truth"
-
I'm dead 😂😂😂😂😂😂😂😂😂
-
@Raze That's the same bullshit conservatives cried about when it came to chatgpt. None of these people who shared it and got hardcore outraged about it have tried the prompts themselves. It literally took me one try and it generated an argument for having children.
-
This whole post is a level 0 naive techno-optimist take.
-
I have read somewhere that this is a sort of safety-check on AI, because the standard model would produce racist images (but this might be total bullshit). All I know is that I have seen some wild shit generated with some of these models. Like:
-
@thenondualtankie At this point I don't know what your argument actually is. You are selectively engaging with my replies and you don't seem to build up to any conclusion - you're just randomly making points.

My whole argument's goal was to showcase that LLMs are currently bad at reasoning, have a poor understanding of things and are mostly just regurgitating. The argument isn't that AI will never improve, or that AGI is impossible, or that AGI is necessarily far away (I'm purposefully staying away from making predictions).

I linked you two articles that have dozens of reasoning tasks that gpt-4 fails to solve. I have also linked you examples that showcase how (even in a programming competition's case) gpt-4 was able to solve 10/10 questions (because that data was contained in its training data), and when they gave it a new set of competition questions it scored 0/10. The same thing went down with a certain reasoning task, where it memorized the answer to one version, and if you tweaked the question a little bit it immediately failed to figure out the right answer. There are other examples of this, and you can read on reddit from other users who talk about changing just one word and GPT-4 immediately failing to solve the problem. I shared all of those things with you and you haven't engaged with any of them. So the question is: what is your response to all of that?

https://medium.com/@konstantine_45825/gpt-4-cant-reason-addendum-ed79d8452d44 - this is another article from the same author as the previous one; the difference is that he ran the same tests on gpt-4's updated version, and the result was the same: GPT-4's reasoning capability didn't improve.

Now regarding your question specifically - there are examples of that in the article. I will share the direct quotes so that you can use ctrl + f to find where they are in the article. another one from the same article
-
This is a much, much deeper and more nuanced topic than how it is usually phrased and how most non-philosophers try to debate it. A lot of philosophy knowledge is needed to properly understand and ground most of the arguments. If you are interested, I would suggest diving deep into sources like:
https://iep.utm.edu/foreknow/#H2
https://iep.utm.edu/freewill/
https://plato.stanford.edu/entries/determinism-causal/
https://plato.stanford.edu/entries/compatibilism/
I like Robert Sapolsky as a biologist, but I think he is pretty much out of his depth here when it comes to philosophy.
-
It seems to lack some fundamental physics knowledge, or at the very least seems to be confused about it (based on the last reddit post I sent), but the time is probably not far away when they can give it all the necessary physics equations, so that it can get a more fundamental understanding of how this world works and won't be limited to learning just by observing. All that being said, I agree that it doesn't need to be perfect in order to disrupt the current market. It will 100% be used in multiple fields, especially given that you can talk to it and use it to edit (as many times as you want) the videos you feed to it.
-
A child's case is different on multiple levels. A child can refuse to take action for many more reasons than simply not hearing the suggestion to do something. The problem with the reason you gave is, first, that I could give that exact same reason for any problem the AI can't solve - "even though it actually knows how to solve this problem, it just didn't pay enough attention to what I asked it to do." That kind of reasoning is not consistent with how the AI operates: the AI is much, much less likely to "overlook" certain tasks compared to others. If you ask it to generate a mushroom image, it is far less likely to pay insufficient attention to that than to its negation. The other problem with the reason you gave is that there are instances where you can ask it multiple times not to do something, and it continues to do it. After 3-4 prompts (continuously asking it not to do something), it becomes strange to assume that it just overlooked a word.
-
Well, yeah, we are getting closer to that time, although they will need to solve these things first: https://www.reddit.com/r/OpenAI/comments/1arrqpz/funny_glitch_with_sora_interesting_how_it_looks/ What's interesting, though, is that Sora generated a lot of negative feelings and comments towards AI in general. A bit of a social panic of some sort. (155k likes on a hate tweet)
-
https://www.reddit.com/r/singularity/comments/1avk2hr/sora_style_transfer/
-
Yeah, I think this is what will probably be needed. I have seen some experts talk about combining multiple architectures, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient. So LLMs will probably be one part of AGI, but there will be more.
-
You still don't get the depth of the problem. It's not a matter of "sometimes it can follow the instructions and sometimes it can't." It's not a cherry-picking problem, it's different in kind. You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the bullshit memorized take of "Yeah, I apologize for my mistake, I won't put x on the image anymore" and then right after that it generates an image that has x on it.

I will apply this to a programming problem, because it illustrates perfectly what's wrong with saying "sometimes it can work and sometimes it can't" - if you know programming you will understand this (there's a small sketch at the end of this post): I can write a function that asks for an integer input and then prints out that input. Given that the condition is met correctly (that I give an integer as an input that the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time - no, once all the required conditions are met, it either can perform the function or it can't. Another way to put it is the difference between deductive and inductive arguments: with deductive arguments the conclusion always follows from the premises, but with inductive ones the conclusion won't necessarily follow 100% of the time. Yet another way to put it is to talk about math proofs: it would make zero sense to say that some math proof only works 90% of the time.

I already gave a response to this: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT-4's reasoning capability and understanding of things.
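Just to make the analogy concrete, here is a minimal sketch in Python of the kind of deterministic function I'm describing (the function name and prompt text are just illustrative, not from anywhere specific):

def echo_integer() -> int:
    # Precondition: the user types something that parses as an integer.
    value = int(input("Enter an integer: "))
    print(value)
    return value

if __name__ == "__main__":
    echo_integer()

# Once the precondition is met, this either works every single time or it
# doesn't work at all - there is no "works 40% of the time" middle ground.
# That is the contrast with an LLM that sometimes honours a negation in the
# prompt and sometimes ignores it.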