Everything posted by zurew
-
https://www.reddit.com/r/singularity/comments/1avk2hr/sora_style_transfer/
-
Yeah, I think this is probably what will be needed. I have seen some experts talk about combining multiple architectures together, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient. So LLMs will probably be one part of AGI, but there will be more to it.
-
You still don't get the depth of the problem. It's not a matter of "sometimes I can follow the instructions and sometimes I don't." It's not a cherry-picking problem, it's different in kind. You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the memorized bullshit take of "Yeah, I apologize for my mistake, I won't put x on the image anymore" and then right after that it generates an image that has x on it.

I will apply this to a programming problem, because it illustrates perfectly what's wrong with the saying "sometimes it can work and sometimes it can't" - if you know programming you will understand this: I can write a function that asks for an integer input and then prints that input out. Given that the condition is met correctly (that I give an integer as input that the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time - no, once all the required conditions are met, it either can perform the function or it can't.

Another way to put it is to contrast inductive and deductive arguments. With deductive arguments the conclusion always follows from the premises, but with inductive ones the conclusion won't necessarily follow 100% of the time. Yet another way to put it is to talk about math proofs: it would make zero sense to say that a given math proof only works 90% of the time.

I already gave a response to this: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT-4's reasoning capability and understanding of things.
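To make the point concrete, here is a minimal sketch of the kind of function I am describing (the names are purely illustrative):

```python
# Once the precondition is met (the input parses as an integer), this function
# works every single time - there is no "works 40% of the time" for it.

def echo_integer() -> int:
    raw = input("Enter an integer: ")
    value = int(raw)   # precondition: the input must be a readable integer
    print(value)       # deterministic: always prints the parsed value
    return value

if __name__ == "__main__":
    echo_integer()
```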
-
I disagree that it can be solved by a better context window. The nature of the problem is much deeper than that. It's not a matter of forgetting something, it's a matter of not doing it in the first place. The examples I mentioned were ones where you give it a prompt and it immediately fails to do what the prompt says (not ones where it fails to maintain a long-term condition you gave it a few responses ago). It's like: User: "Hey GPT, don't do x." GPT-4: does x.
-
Will look forward to GPT-5. I will give one more thing that they will eventually need to solve: right now it seems to be the case that AI doesn't have an abstract understanding of most things (I already said this), but more specifically, it doesn't have an abstract understanding of negation or the negative. The proof for this is the fact that in a lot of cases, when you tell it not to do something, it will still do it. Yes, in some cases it might get it right, but this is a problem of principle, where you either have a real understanding of negation or you don't. This includes instances where you want to create an image and say "please don't include x on the image", or where you want it to not include a specific thing in its answer.

This problem seems to be a tough one, given that you can't just show it a clear pattern of what negation is by using a dataset that has a finite set of negating examples. In fact, I would say the whole category of 'abstract understanding of things' can't be learned purely by trying to find the right pattern between a finite set of things - it seems to require a different approach in kind. Because, for instance, if I have a prompt of "don't generate a mushroom on the image", I would need to show an infinite number of images that don't include a mushroom, and even then the prompt's full meaning wouldn't be fully captured. I will grant, though, that AI won't necessarily need a real abstract understanding of things to do a lot of tasks, but still: eventually, to make it as reliable as possible, some solution will need to be proposed to this problem.
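To illustrate what I mean by negation being a yes/no matter per instance, here is a hypothetical check (the labels and the detection step are assumptions, just for the sketch):

```python
# For any single prompt like "don't include a mushroom", obeying the negation is a
# yes/no matter per instance - not something that can be partially understood.
# `detected_objects` is an assumed stand-in for whatever labels you can extract
# from the generated image.

def violates_negation(detected_objects: set[str], forbidden: str) -> bool:
    """Return True if the forbidden item still shows up in the output."""
    return forbidden.lower() in {obj.lower() for obj in detected_objects}

# The model was told "don't include a mushroom" but generated one anyway:
print(violates_negation({"tree", "mushroom", "river"}, "mushroom"))  # True
```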
-
Yeah, pretty impressive. We are probably not very far from AI being able to make sense of and extract meaning from videos (beyond just using context clues from a given prompt or reading transcripts).
-
It's interesting how it can combine videos: https://www.reddit.com/r/OpenAI/comments/1arztj9/sora_can_combine_videos/
-
I love how you were incapable of engaging directly with anything that was said, had to strawman everyone, and had to use one of the worst arguments to make a case for your position. Most of us didn't say that AGI definitely won't come in the next 5-10 years; most of us just tried to point out that the arguments used to make a case for such claims are weak and have a lot of holes in them, and that some of you have unreasonably high confidence in such claims. If you really want to make the "but experts said this and you reject it" argument, then first show us a survey where there is a high consensus among AI experts on when AGI will emerge (but it has to be a survey where AGI has a concise definition, so that those experts have the same definition in mind when they answer the question). Secondly, you will have to make a case for why this time the experts' predictions will come true (because there is a clear inductive counterargument to this survey argument - namely, that most AI experts' predictions in the past have failed miserably).
-
Yeah, I agree that it can be useful for multiple different things, but I tried to show some of the things that I and others have recognized regarding its semantic understanding and its reasoning capability. The article I linked shows many examples, but here is another one with many more: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 I recommend checking this one as well: https://amistrongeryet.substack.com/p/gpt-4-capabilities
-
Yes, and it fails to answer trivially easy questions that a kid in elementary school could answer. It makes zero sense to say that it has a semantic understanding of things while at the same time it fails to give the right answer to trivial questions. Yes, sometimes it can provide the right answer to more complex questions, but if it actually had a semantic understanding, it wouldn't fail at the trivial questions - therefore, I will say it again: it only deals with patterns and doesn't understand the meaning of anything.

Right now you could do this with me: give me a foreign language that I understand literally nothing about - in terms of the meaning of sentences and words - and then give me a question in that foreign language with the right answer below it. If I memorize the syntax (meaning, if I can recognize which symbol comes after which other symbol), then I will be able to give the right answer to that question even though I semantically understand nothing about the question or the answer - I can just use the memorized patterns. The AI seems to be doing the exact same thing, except with a little twist: it can somewhat adapt those memorized patterns, and if it sees a pattern that is very similar to another pattern it already encountered in its training data, then - in the context of answering questions - it will assume the answer must be the exact same or very similar, even though changing one word or adding a comma to a question might change its meaning entirely. Here is one example that demonstrates this problem
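To make the 'memorized patterns' point concrete, here is a toy sketch (purely hypothetical, and not the example I am referring to above):

```python
# Answering purely by memorized pattern, with zero semantic understanding:
# a lookup table maps question strings to answers. It gives the right answer
# for questions it has seen and falls apart the moment the wording changes.

memorized = {
    "what is the capital of france?": "Paris",
    "how many legs does a spider have?": "8",
}

def answer(question: str) -> str:
    return memorized.get(question.strip().lower(), "<no matching pattern>")

print(answer("What is the capital of France?"))       # "Paris" - the memorized pattern matched
print(answer("What is the capital city of France?"))  # "<no matching pattern>" - a tiny rewording breaks it
```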
-
The stage yellow AI needs to be exceptionally and intelligently prompted? --------- But yeah, I know it can sometimes do good things (if you figure out what prompt can work). But it proves my earlier point about the problem that currently it doesn't really know the semantics of things - it only remembers patterns - and once you change that pattern (in this case the prompt) a little bit, in a way where the meaning is essentially the same, it falls apart and fails to apply the right pattern.
-
lol
-
@Yousif - see what @Raze did there? He actually made points and he directly engaged with the question that was asked of you. Once you grow out of your non-dual-rambling-wannabe-guru phase, maybe you will be capable of doing that too, but until then I will stop engaging with you, because it's a total waste of everyone's time.
-
@Yousif Yeah, as I thought, you have nothing of substance to contribute to any part of this conflict - you are here to give platitudes (that everyone knows) and to virtue signal. The problem with what you are doing is that you are derailing the thread and stopping other people from having a substantive conversation.
-
Yeah, this is virtue signaling - literally everyone knows this. But war comes with the killing of innocent people, and what you don't take into account is that being passive can sometimes bring more innocent deaths than engaging in wars. That's why you need to drop the platitudes, come back to real life, and actually analyze and engage with the situation, so that you can come up with the best strategy - according to your knowledge - that can actually minimize global suffering and death long term.
-
@Yousif Without virtue signaling and rambling about non-duality, can you give an exact, concise plan for what Israel should do?
-
Feel free to suggest something other than eliminating Hamas.
-
@Danioover9000 If you want to engage productively, stop posturing and virtue signaling - literally no one cares. Everyone has emotions and feelings around this topic, so I don't see how that engages with or contradicts anything that was said. So many new and novel things being said there, good job. "You are biased, therefore I won't directly engage with anything that was said" - a very intelligent and productive way to argue. All of your points are stupid, because they can be used for both sides. I thought you weren't in favour of relativising the morality of both sides. Waiting for another one of your schizo rants.
-
Why would you use the absolute number of civilians killed to establish intent? I have already made posts on why that is a much worse metric to go by compared to relative risk.
-
Yeah, this is a good metaphor. Is progress in climbing trees a good and reliable metric for tracking progress toward reaching the moon?
-
Maybe, or maybe not. Maybe they will eventually abandon LLMs because they hit a roadblock - we have no idea. Making confident statements and predictions is useless, because even the experts are shooting in the dark and making wildly different statements about AGI. It seems to be the case that it gets the syntax of things (the rules) but it doesn't really get the semantics (the abstract meaning of things) - this can be demonstrated with any GPT or LLM model. And again, there is still the problem of how you need to connect the domain-specific AIs together. There is also a big problem with self-deception as you increase intelligence. If you scale things up a lot, that will make it harder for the AI to introspect, and we probably want the AI to have the ability to develop its own self - and a prerequisite for that is the ability to introspect.
-
Sora is another example of a domain-specific AI, but it's not clear how we are advancing toward AGI (where you can connect all the domain-specific AIs together under one framework that actually works the way we want it to work). It's like we keep pushing back the problem of how we need to connect the pieces together so that AGI can emerge, while pretending that we are making real progress on that problem. Merely creating more and more advanced domain-specific AIs won't be sufficient - you need to connect them in a specific way. It's like we take the progress of domain-specific AI and mistake it for progress toward AGI.
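To make the 'connecting the pieces' worry concrete, here is a hypothetical sketch of what merely wiring domain-specific models together looks like (all the model interfaces below are made up for illustration):

```python
# A naive keyword router dispatching each task to a specialist model.
# Nothing in this glue code produces general intelligence - how the pieces
# share meaning and coordinate is exactly the part it leaves unsolved.

def call_video_model(task: str) -> str:     # assumed interface, e.g. a Sora-like model
    return f"[video model handled: {task}]"

def call_image_model(task: str) -> str:     # assumed interface, e.g. a diffusion model
    return f"[image model handled: {task}]"

def call_language_model(task: str) -> str:  # assumed interface, e.g. an LLM
    return f"[language model handled: {task}]"

def route(task: str) -> str:
    task_lower = task.lower()
    if "video" in task_lower:
        return call_video_model(task)
    if "image" in task_lower:
        return call_image_model(task)
    return call_language_model(task)

print(route("Generate a video of a cat surfing"))
```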
-
Yeah, one of the strongest arguments to support this is the fact that when Hamas first arrived on Oct 7, there were no Israeli soldiers near them (most of them were at home for a holiday). There were places where Israeli soldiers only arrived 4-6 or more hours after the attack, so at those places Hamas could literally do whatever they wanted, freely - and they still killed civilians (so it would be nearly impossible to argue that they weren't intentionally targeting civilians). Also, what possible justification could be given for attacking people at a music festival? Btw, I'm surprised this is still considered contentious for some people here.
-
I agree. The assumption that this speed of development will stay on an exponential curve is a big one, and I haven't heard a strong argument yet that would properly ground it. There also seems to be an assumption that if we just scale up these models and use more computing power, eventually general intelligence will just emerge. But there are certain problems that can't simply be solved by more computing power - for example relevance realization and other moral and philosophical problems. It feels similar to saying "well, with more technological development we will make something that lets us travel faster than light" - well no, it's not a matter of a lack of technological development, it's a problem with the laws of physics, and until you can contradict that, you can have as much technological development as you want, you will still have certain limitations that you can't cross.
-
Yes, but ideally, for the reasons I mentioned before, we should focus on relative risk (the blue) rather than on both, or only on the civilian casualty ratio. Now of course, none of these metrics are absolute. The more variables we add, the bigger picture we can get about the war; the issue, however, is with the weighing of all those variables. For instance, in my view, relative risk carries much more weight and is much more informative for assessing genocidal intent than damage done to buildings, but of course that damage shouldn't be ignored.

There are still, of course, ways to try to establish genocidal intent, but it will be hard, because you will have to explain how you can end up with such a high relative risk if there is genocidal intent in mind (and there are other contradictory factors that you will have to blast through as well). Now, do you need to establish genocidal intent on Israel's part to make criticism of Israel? No, of course not, and a lot of people here in this thread and in other places seem to forget that you don't have to die on this hill (that you have to prove genocidal intent). You can defend the Palestinian side without needing to use weak and bad arguments to prove genocidal intent. There are a lot of other criticisms and arguments you can make against Israel, and such arguments will be much easier to defend and establish (for example damage done to buildings, or you can pick any other thing). Regardless of what side you are on (anyone who is reading this) - people need to stop using the civilian casualty ratio to prove genocidal intent, because relative risk is just more reliable for that.
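To make the difference between the two metrics concrete, here is a minimal sketch with made-up placeholder numbers (the exact definition of relative risk used here is an assumption on my part):

```python
# "Relative risk" is assumed here to mean a civilian's chance of being killed relative
# to a combatant's chance of being killed; the civilian casualty ratio is simply
# civilian deaths divided by combatant deaths. All numbers below are purely hypothetical.

def casualty_ratio(civ_deaths: int, comb_deaths: int) -> float:
    return civ_deaths / comb_deaths

def relative_risk(civ_deaths: int, civ_population: int,
                  comb_deaths: int, comb_population: int) -> float:
    civilian_risk = civ_deaths / civ_population
    combatant_risk = comb_deaths / comb_population
    return civilian_risk / combatant_risk

# Hypothetical numbers for illustration only:
print(casualty_ratio(2000, 1000))                    # 2.0 civilians killed per combatant
print(relative_risk(2000, 1_000_000, 1000, 30_000))  # 0.06 - each civilian far less at risk than each combatant
```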
