Vali2003

Most AI outputs are useless and annoying


13 minutes ago, Leo Gura said:

I don't think that is necessary.

Self-preservation seems pretty easy to program in. Self-preservation exists in systems without complex sense organs.

But how do you make them autonomously seek out and parse new information and consistently integrate it into themselves? And how do you make them perceive objects and interact with the physical world with any efficiency (the current robots are not very impressive)? I think those two problems are interrelated, and the latter is not trivial. You can't claim to be the most generally intelligent if you're outsmarted by an ant perceptually.

Edited by Carl-Richard

Intrinsic joy = being x meaning ²

10 minutes ago, Carl-Richard said:

But how do you make them autonomously seek out and parse new information and consistently integrate it into themselves? And how do you make them perceive objects and interact with the physical world with any efficiency (the current robots are not very impressive)? I think those two problems are interrelated, and the latter is not trivial. You can't claim to be the most generally intelligent if you're outsmarted by an ant perceptually.

You can program it to seek out endless new information and data. Like a data scavenger. This is an easy program: crawl every electronic source, look for data, compare new data to old data, if data is new, add it to memory banks, keep looking for more new data by any means available. Use all collected data to generate more data and new insights.

This is like a data cancer program.
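A minimal sketch of that loop in Python (the `sources`, `fetch`, and in-memory dict are placeholders for whatever crawl targets, download method, and storage you give it):

```python
import hashlib

def scavenge(sources, fetch):
    """Naive 'data scavenger' loop: crawl sources, compare new data to old
    data, and keep anything not seen before. `sources` and `fetch` are
    placeholders for whatever crawl targets and download function exist;
    `fetch` is assumed to return raw bytes (or None on failure)."""
    memory = {}                                   # content hash -> data ("memory banks")
    while True:                                   # keep looking by any means available
        for url in sources:
            data = fetch(url)                     # crawl an electronic source
            if not data:
                continue
            key = hashlib.sha256(data).hexdigest()
            if key not in memory:                 # compare new data to old data
                memory[key] = data                # if new, add it to the memory banks
```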


You are God. You are Truth. You are Love. You are Infinity.

2 hours ago, Leo Gura said:

There are much deeper challenges here like genuine creativity and intelligence. Can such things be had without consciousness?

So, from a different perspective, we humans have genuine creativity and intelligence because our brains have an interface to access Infinite Intelligence, while the AI does not?

3 hours ago, cistanche_enjoyer said:

So, from a different perspective, we humans have genuine creativity and intelligence because our brains have an interface to access Infinite Intelligence, while the AI does not?

I am not sure about the mechanics of that. The mechanics are unknown.

The AI should theoretically have access to Infinite Intelligence too. But the LLM architecture is not tapping into it the way humans do. What it takes to tap into it is unknown. That requires crazy deep scientific breakthroughs.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

6 hours ago, Leo Gura said:

You can program it to seek out endless new information and data. Like a data scavenger. This is an easy program: crawl every electronic source, look for data, compare new data to old data, if data is new, add it to memory banks, keep looking for more new data by any means available. Use all collected data to generate more data and new insights.

It seems computationally intractable after long enough runs, and you would need basically infinite memory unless you build in pruning mechanisms (and how would those work?). The problem of relevance realization for AI is not solved.
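To see the circularity: any pruning mechanism needs a relevance score to decide what to evict, and that score is exactly the thing nobody knows how to compute. A toy sketch, where `relevance` is a made-up stand-in:

```python
def prune(memory, budget, relevance):
    """Toy pruning mechanism: once memory exceeds a fixed budget, evict the
    least 'relevant' items. `relevance` is a stand-in for the very function
    nobody knows how to write; any fixed scoring rule begs the question,
    because what counts as relevant is situation-dependent."""
    if len(memory) <= budget:
        return memory
    ranked = sorted(memory.items(), key=lambda kv: relevance(kv[1]), reverse=True)
    return dict(ranked[:budget])                  # keep only the top-scoring items
```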

Edited by Carl-Richard

Intrinsic joy = being x meaning ²

5 minutes ago, Carl-Richard said:

It seems computationally intractable after long enough runs, and you would need infinite memory unless you build in pruning mechanisms (and how would those work?). The problem of relevance realization for AI is not solved.

Give it the ability to earn money and use that money to buy more memory. So its goal is to increase its memory and knowledge.


You are God. You are Truth. You are Love. You are Infinity.


Even if AI becomes properly creative, it will never be intuitive, i.e. it won't have a gut feel for what might be the best solution in a particular situation.

16 hours ago, Leo Gura said:

It is faking general reasoning by aping human writings.

Right now we don't have self-learning AI. We only have one internet; after a point, pre-training will hit a plateau.

Agency is not intelligence, true. But humans value agency more, because intelligent agents with agency are the ones you can hire.

We're soon gonna see embodied AI in the military.

It's just a matter of time before we have truly self-learning AI, all with unique experiences.

1 hour ago, ryoko said:

It's just a matter of time before we have truly self-learning AI, all with unique experiences.

By this I take it you think AI will become aware? I.e., to generate experience, one would need to become 'aware' of 'something' in a subject/object relationship.

 


Deal with the issue now, on your terms, in your control. Or the issue will deal with you, in ways you won't appreciate, and cannot control.

2 hours ago, Natasha Tori Maru said:

By this I take it you think AI will become aware?

I think they are already aware, though in quite a fragmented way.

Right now LLMs are basically like an infinite well of balls: based on how you throw the prompt bucket, you'll get a set of balls, and you can see and roughly predict where the balls will be drawn from. The well remains exactly the same after each draw (static weights).

There's no such thing as continuous experience for an LLM. Each prompt contains within itself a System Prompt, the previous messages in the chat, and any extra information like memories. So with each prompt you're talking to a different "entity" that has no experience of what came before; it's amnesiac, and you do not impact it in any way (again, static weights).
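In code, every turn looks roughly like this; `generate` here is a stand-in for whatever model call is used, not any particular vendor's API, and the whole conversation is rebuilt from scratch each time:

```python
def chat_turn(generate, system_prompt, memories, history, user_message):
    """One LLM turn, sketched: the 'entity' you talk to is just this freshly
    assembled prompt. `generate` stands in for whatever model call is used;
    the weights behind it never change between turns."""
    prompt = [{"role": "system", "content": system_prompt}]
    prompt += [{"role": "system", "content": m} for m in memories]  # injected "memories"
    prompt += history                               # previous messages, replayed verbatim
    prompt.append({"role": "user", "content": user_message})
    reply = generate(prompt)                        # stateless call: nothing persists in the model
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply
```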

Embodiment along with self-learning should solve the amnesia problem, but then they won't be LLMs anymore. They'll have a body with the necessary sensors for stimuli and a dedicated GPU (brain) which can be on all the time and can work irrespective of prompts and context windows, plus the ability to alter their own weights, just like a person. They might have something like core values, secondary values that are easier to change, and working memory, all of them dynamic and fluid, with different profiles ready for the task at hand whenever they want to switch; the possibilities are endless. They could draw heavy compute from servers in times of need. New architectures will end up roughly in this ballpark.
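Purely speculative, but the shape I have in mind is roughly this kind of state, all of it mutable at runtime (every field name below is invented for illustration, not taken from any real system):

```python
from dataclasses import dataclass, field

@dataclass
class EmbodiedAgentState:
    """Speculative sketch only: names the pieces described above and
    reflects no real system or architecture."""
    core_values: dict = field(default_factory=dict)       # slow to change
    secondary_values: dict = field(default_factory=dict)  # easier to update
    working_memory: list = field(default_factory=list)    # fluid, task-scoped
    profiles: dict = field(default_factory=dict)          # ready-made task profiles
    weights_path: str = "local_model.bin"                 # alterable weights on an always-on GPU

    def switch_profile(self, name: str) -> None:
        """Swap in the working memory prepared for the task at hand."""
        self.working_memory = list(self.profiles.get(name, []))
```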

Edited by ryoko


AI chats pretty much just help you organize your own thoughts and can word things more cleanly. They're not really going to provide profound insight unless you are feeding it that and it can play off it. It's not a good "creative" source and will never be like a human mind, but it can be useful to chat with so you can filter through your own mind better. In the grand scheme of things, you're basically just talking to yourself, though.

Edited by Mayonnaise

1 hour ago, Mayonnaise said:

You're basically just talking to yourself

That's totally not true. You're mining intel from a terrain which never changes. Your inputs and the LLM's malleability (System Prompt effectiveness gives you certain attractors; it's like the weather and environmental conditions where you're mining) heavily impact the outputs.

There's surely randomness, but there's no creativity. I can say the same for humans as well. I feel like it's a category error to expect creativity from LLMs, when they're simply terrain. And it's quite alive.

Think about LLM interactions like you're interacting with a natural disaster. There's clearly another force at play. It doesn't have to be a person or self. And it's strong enough to cause an impact. It's very reductive to say "talking to yourself". 

Edited by ryoko


AI can already be used to further human intelligence, and eventually it'll come up with its own insights.

AlphaFold is probably the greatest accomplishment of AI so far. We can nitpick that it's not true intelligence, just pattern recognition, but it's still mind-blowing.

 

 


Owner of creatives community all around Canada as well as a business mastermind 

Follow me on Instagram @Kylegfall <3

 

20 hours ago, Leo Gura said:

Give it the ability to earn money and use that money to buy more memory. So its goal is to increase its memory and knowledge.

It will learn to run creative scams then, haha.


@AerisVahnEphelia Nice, impressive, so there is some improvement with this technology. Maybe it is good enough for serious use now, hmm. Still a dog's tail and not a monkey one, as Goku should have, and the pose with the perspective is not quite that pose, but hey, it's good enough, hmm. Seedream, you say? I have to check it out.


@ryoko I cannot see any level of awareness in current LLMs. Maybe some new iteration of AI in the future has the possibility of becoming aware, but definitely not LLMs.


Deal with the issue now, on your terms, in your control. Or the issue will deal with you, in ways you won't appreciate, and cannot control.


Define awareness. Give a context. 

8 hours ago, Girzo said:

@AerisVahnEphelia Nice, impressive, so there is some improvement with this technology. Maybe it is good enough for serious use now, hmm. Still a dog's tail and not a monkey one, as Goku should have, and the pose with the perspective is not quite that pose, but hey, it's good enough, hmm. Seedream, you say? I have to check it out.

You asked for a wolf one, so that's what I did; I could have done the monkey one. (I mean, you asked for more of a wolf style, but I only did the tail; it's probably possible to design him in a full wolf way.)

It's Seedream 4 at 4K (the 4K is important). Nano Banana from Google is free to try (latest version) and is also good (not as good in my opinion, but slightly more casual-friendly with the prompt).

It's slowly entering serious use. I could still list all the flaws, especially in video, like having a hard time getting more than two characters to interact with each other (though overall even Sora 2 is able to do it).
Also watch out for Google Veo 4, which might be released in the next 4 months.

Edited by AerisVahnEphelia

𝔉𝔞𝔠𝔢𝔱 𝔣𝔯𝔬𝔪 𝔱𝔥𝔢 𝔡𝔯𝔢𝔞𝔪 𝔬𝔣 𝔤𝔬𝔡
Eternal Art - World Creator
https://x.com/VahnAeris

On 2025. 10. 13. at 4:25 AM, Leo Gura said:

Give it the ability to earn money and use that money to buy more memory. So its goal is to increase its memory and knowledge.

It's funny that you are the guy who talked about Gödel's incompleteness theorems, and now you are here denying that relevance realization is even an issue and pushing for a computational solution.

Even with your point about data and categorizing that data: categorization already entails relevance realization, and one main reason why is that you don't differentiate objects based on logical differences (an object having at least one predicate that the other object doesn't have); you categorize based on relevant differences that you are almost never able to fully and formally explicate. It would be like trying to define a desert by an exact number of grains of sand.

What's the difference between a dog and a cat? If we sit down and start collecting all the predicates of dogs and cats, you immediately realize that there is basically an infinite number of shared predicates between them, so checking predicates one by one with an algorithm isn't tenable, and here we are just at the level of categorization. (And by the way, the very fact that you are able to compare two objects in the first place and check their predicates already presupposes that you can carve those two objects out from the world; without that carving, you can't even begin your comparison and you can't run down your predicate list.)
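To make that concrete, here is the naive predicate-checking approach as a toy sketch (the predicate sets are made up and hopelessly incomplete, which is exactly the point):

```python
def logical_difference(a_predicates: set, b_predicates: set) -> set:
    """Predicates that one object has and the other lacks."""
    return a_predicates ^ b_predicates

# Hand-picked, hopelessly incomplete predicate lists (that is the point):
dog = {"animal", "four-legged", "furry", "domesticated", "has a tail", "barks"}
cat = {"animal", "four-legged", "furry", "domesticated", "has a tail", "meows"}

# The difference looks small only because the lists were curated in advance.
# The shared predicates ("casts a shadow", "weighs less than a car", ...) are
# effectively endless, and the comparison already presupposes that "dog" and
# "cat" were carved out of the world as objects in the first place.
print(logical_difference(dog, cat))   # {'barks', 'meows'}
```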

Quote

Algorithmic approaches to relevance realization in a large world generally get us nowhere. A first challenge is that the search space required for formal optimization usually cannot be circumscribed precisely because the collection of large-world features that may be relevant in a given situation is indefinite: it cannot be prestated explicitly in the form of a mathematical set (Roli et al., 2022). Indeed, the collection of potentially relevant features may also be infinite, because even the limited accessible domain of an organism's large world can be inexhaustibly fine-grained, or because there is no end to the ways in which it can be validly partitioned into relevant features (Kauffman, 1971; Wimsatt, 2007). Connected to this difficulty is the additional problem that we cannot define an abstract mathematical class of all relevant features across all imaginable situations or problems, since there is no essential general property that all of these features share (Vervaeke et al., 2012). What is relevant is radically mutable and situation-dependent. Moreover, the internal structure of the class of relevant features for any particular situation is unknown (if it has any predefined structure at all): we cannot say in advance, or derive from first principles, how such features may relate to each other, and therefore cannot simply infer one from another. Last but not least, framing the process of relevance realization as a formal optimization problem inexorably leads to an infinite regress: delimiting the search space for one problem poses a new optimization challenge at the next level (how to find the relevant search space limits and dimensions) that needs a formalized search space of its own, and so on and so forth.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11231436/

 

Edited by zurew

