zurew

Everything posted by zurew

  1. It seems to lack some fundamental physics knowledge, or at the very least seems to be confused about it (based on the last reddit post I sent), but the time is probably not far away when they can give it all the necessary physics equations, so that it can get a more fundamental understanding of how this world works and won't be limited to learning just by observing. All that being said, I agree that it doesn't need to be perfect in order to disrupt the current market. It will 100% be used in multiple fields, especially given that you can talk to it and use it to edit (as many times as you want) videos that you feed to it.
  2. A child's case is different on multiple levels. A child can refuse to take action for many more reasons than simply not hearing the suggestion to do something. The first problem with the reason you gave is that I could give that exact same reason for any problem the AI can't solve: "even though it actually knows how to solve this problem, it just didn't pay enough attention to what I asked it to do." The problem with that kind of reasoning is that it is not consistent with how the AI operates. The AI is much, much less likely to "overlook" certain tasks compared to others. If you ask it to generate a mushroom image, it is far less likely to pay insufficient attention to that request than to its negation. The other problem with the reason you gave is that there are instances where you can ask it multiple times not to do an action, and it continues to do it. So after 3-4 prompts (continuously asking it not to do something), it becomes strange to assume that it just overlooked a word.
  3. Well, yeah, we are getting closer to that time, although they will need to solve things like this first: https://www.reddit.com/r/OpenAI/comments/1arrqpz/funny_glitch_with_sora_interesting_how_it_looks/ What's interesting, though, is that Sora generated a lot of negative feelings and comments towards AI in general. A little bit of a social panic of some sort (155k likes on a hate tweet).
  4. https://www.reddit.com/r/singularity/comments/1avk2hr/sora_style_transfer/
  5. Yeah, I think this is probably what will be needed. I have seen some experts talk about combining multiple architectures together, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient. So LLMs will probably be one part of AGI, but there will be more.
  6. You still don't get the depth of the problem. It's not a matter of "sometimes I can follow the instructions and sometimes I don't." It's not a cherry-picking problem; it's different in kind. You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do the action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the memorized, bullshit take of "Yeah, I apologize for my mistake, I won't put x on the image anymore" and then right after that it generates an image that has x on it. I will apply this to a programming problem, because it perfectly illustrates what's wrong with saying "sometimes it can work and sometimes it can't" - if you know programming you will understand this: I can write a function that asks for an integer input and then prints out that input (a minimal sketch of this is below). Now, given that the condition is met correctly (that I give an integer as input, which the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time - no, once all the required conditions are met, it either can perform the function or it can't. Another way to put it: saying this conflates inductive and deductive arguments. With deductive arguments the conclusion always follows from the premises, but with inductive ones the conclusion won't necessarily follow 100% of the time. Yet another way to put it is to talk about math proofs. It would make zero sense to say that a given math proof only works 90% of the time. I already gave a response to this: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT-4's reasoning capability and understanding of things.
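     To make the programming analogy concrete, here is a minimal Python sketch of the kind of function described above (the function name is purely illustrative):

     ```python
     # A minimal sketch of the "ask for an integer, then print it" function.
     def echo_integer() -> int:
         # Precondition: the user actually types a valid integer.
         raw = input("Enter an integer: ")
         value = int(raw)  # only fails if the precondition is violated
         print(value)
         return value
     ```

     Once the precondition holds (the input really is an integer the computer can read), this prints the value every single time - it makes no sense to say it works 40% or 30% of the time, which is exactly the deterministic standard being contrasted with the model's behaviour.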
  7. I disagree that it can be solved by a better context window. The nature of the problem is much deeper than that. It's not a matter of forgetting something; it's a matter of not doing it in the first place. The examples I mentioned were ones where you give it a prompt and it immediately fails to do what the prompt says (not ones where it fails to maintain a long-term condition that you gave it a few responses ago). It's like: User: "Hey GPT, don't do x." GPT-4: does x.
  8. Will look forward to GPT-5. I will give one more thing that they will eventually need to solve: right now it seems to be the case that AI doesn't have an abstract understanding of most things (I already said this), and more specifically, it doesn't have an abstract understanding of negation, or the negative. The proof of this is the fact that in a lot of cases, when you tell it not to do something, it will still do it. Yes, in some cases it might get it right, but this is a problem of principle, where you either have a real understanding of negation or you don't. This includes instances where you want to create an image and say 'please don't include x on the image', or where you want it to not include a specific thing in its answer. This problem seems to be a tough one, given that you can't just show it a clear pattern of what negation is by using a dataset that has a finite set of negating examples. In fact, I would say the whole category of 'abstract understanding of things' can't be learned purely by trying to find the right pattern within a finite set of things - it seems to require an approach that is different in kind. For instance, if I have a prompt of 'don't generate a mushroom on the image', I would need to show an infinite number of instances of images that don't include a mushroom, and even then the prompt's full meaning wouldn't be fully captured. I will grant, though, that AI won't necessarily need a real abstract understanding of things to do a lot of tasks, but still: eventually, to make it as reliable as possible, some solution to this problem will need to be proposed.
  9. Yeah, pretty impressive. We are probably not very far from AI being able to make sense of and extract meaning from videos (beyond just using context clues from a given prompt or reading transcripts).
  10. It's interesting how it can combine videos: https://www.reddit.com/r/OpenAI/comments/1arztj9/sora_can_combine_videos/
  11. I love how you were incapable of engaging directly with anything that was said, had to strawman everyone, and had to use one of the worst arguments to make a case for your position. Most of us didn't say that AGI definitely won't come in the next 5-10 years; most of us just tried to point out that the arguments used to make a case for such claims are weak and full of holes, and that some of you have unreasonably high confidence in such claims. If you really want to make the "but experts said this and you reject it" argument, then first show us a survey where there is a high consensus among AI experts on when AGI will emerge (but it has to be a survey where AGI has a concise definition, so that those experts have the same definition in mind when they answer the question). Secondly, you will have to make a case for why this time the experts' predictions will come true (because there is a clear inductive counter-argument to this survey argument - namely, that most of the AI experts' predictions in the past have failed miserably).
  12. Yeah, I agree that it can be useful for multiple different things, but I tried to show some of the things that I and others have recognized regarding its semantic understanding and its reasoning capability. The article I linked shows many examples, but here is another one with many examples: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 I recommend checking this one as well: https://amistrongeryet.substack.com/p/gpt-4-capabilities This is also interesting:
  13. Yes, and it fails to answer trivially easy questions that a kid in elementary school could answer. It makes zero sense to say that it has a semantic understanding of things while at the same time it fails to give the right answer to trivial questions. Yes, sometimes it can provide the right answer to more complex questions, but if it actually had a semantic understanding, it wouldn't fail to answer the trivial questions - therefore, I will say it again: it only deals with patterns and doesn't understand the meaning of anything. Right now you could do this with me: give me a foreign language that I understand literally nothing about - in terms of the meaning of sentences and words - and then give me a question in that foreign language with the right answer below it. If I memorize the syntax (meaning, if I can recognize which symbol comes after which other symbol), then I will be able to give the right answer to said question, even though I understand nothing semantically about the question or the answer - I can just use the memorized patterns (a toy sketch of this is below). The AI seems to be doing the exact same thing, except with a little twist: it can somewhat adapt said memorized patterns, and if it sees a pattern that is very similar to another pattern it has already encountered in its training data, then - in the context of answering questions - it will assume the answer must be exactly the same or very similar, even though changing one word or adding a comma to a question might change its meaning entirely. Here is one example that demonstrates this problem
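      To make the memorization analogy concrete, here is a toy Python sketch (the "foreign language" question/answer pairs are made up purely for illustration): it "answers" by surface-level similarity to memorized pairs, with no understanding of what any symbol means.

      ```python
      # A toy "answerer" that only matches memorized question/answer pairs.
      # It has no idea what the symbols mean, yet it looks competent whenever
      # the incoming question is close enough to something it has seen before.
      from difflib import SequenceMatcher

      MEMORIZED = {
          "kio estas la cielo koloro?": "blua",          # made-up pairs
          "mi estas la demando unu?": "respondo unu",
      }

      def answer(question: str) -> str:
          # Pick the memorized question most similar to the input, purely by surface form.
          best_q = max(MEMORIZED, key=lambda q: SequenceMatcher(None, q, question).ratio())
          return MEMORIZED[best_q]

      print(answer("kio estas la cielo koloro?"))      # "blua"
      print(answer("kio ne estas la cielo koloro?"))   # still "blua", despite the negation
      ```

      A single added word that flips the meaning still maps to the same memorized answer, which is the failure mode described above.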
  14. The stage yellow AI needs to be exceptionally and intelligently prompted? --------- But yeah, I know it can sometimes do good things (if you figure out what prompt will work). But it proves my earlier point about the problem that currently it doesn't really know the semantics of things - it only remembers patterns - and once you change that pattern (in this case the prompt) a little bit (in a way where the meaning stays essentially the same), it falls apart and fails to apply the right pattern.
  15. @Yousif - see what @Raze did there? He actually made points and directly engaged with the question that was asked of you. Once you grow out of your non-dual-rambling-wannabe-guru phase, maybe you will be capable of doing that too, but until then I will stop engaging with you, because it's a total waste of everyone's time.
  16. @Yousif Yeah, as I thought, you have nothing of substance to contribute regarding any aspect of the conflict - you are here to give platitudes (that everyone knows) and to virtue signal. The problem with what you are doing is that you are derailing the thread and stopping other people from having a substantive conversation.
  17. Yeah, this is virtue signaling - literally everyone knows this. War comes with the killing of innocents, and what you don't take into account is that being passive can sometimes bring more innocent deaths than engaging in war - that's why you need to drop the platitudes, come back to real life, and try to actually analyze and engage with the situation, so that you can come up with the best strategy (according to your knowledge) that can actually minimize global suffering and death long term.
  18. @Yousif Without virtue signaling and rambling about non-duality, can you give an exact, concise plan for what Israel should do?
  19. Feel free to suggest something different from eliminating Hamas.
  20. @Danioover9000 If you want to engage productively, stop posturing and virtue signaling - literally no one cares. Everyone has emotions and feelings around this topic, so I don't see how that engages with or contradicts anything that was said. So many new and novel things being said there, good job. "You are biased, therefore I won't directly engage with anything that was said" - a very intelligent and productive way to argue. All of your points are stupid, because they can be used for both sides. I thought you weren't in favour of relativising the morality of both sides. Waiting for another of your schizo-rants.
  21. Why would you use the absolute number of civilians killed to establish intent? I already made posts about why that's a much worse metric to go by compared to relative risk.
  22. Yeah, this is a good metaphor. Is progress in climbing up trees a good and reliable metric for tracking progression towards reaching the moon?
  23. Maybe, or maybe not. Maybe they will eventually abandon LLMs because they hit a roadblock - we have no idea. Making confident statements and predictions is useless, because even experts are shooting in the dark and making wildly different statements about AGI. It seems to be the case that it gets the syntax of things (the rules) but doesn't really get the semantics (the abstract meaning of things) - this can be demonstrated with any GPT or LLM model. And again, there is still the problem of how you need to connect the domain-specific AIs together. There is also a big problem with self-deception as you increase intelligence. If you scale things up a lot, that will make it harder for the AI to introspect, and we probably want the AI to have the ability to develop its own self - and a prerequisite for that is the ability to introspect.
  24. Sora is another example of a domain-specific AI, but it's not clear how we are advancing towards AGI (where you can connect all the domain-specific AIs together under one framework that actually works the way we want it to work). It's like we keep pushing back the problem of how to connect the pieces together so that AGI can emerge, while pretending that we are making real progress on the problem. Merely creating more and more advanced domain-specific AIs won't be sufficient - you need to connect them in a specific way. It's like we take the progress of domain-specific AI and mistake it for progress towards AGI.