Vali2003

Most AI outputs are useless and annoying


13 minutes ago, Leo Gura said:

I don't think that is necessary.

Self-preservation seems pretty easy to program in. Self-preservation exists in systems without complex sense organs.

But how do you make them autonomously seek out and parse out new information and consistently integrate it into themselves? And how do you make them perceive objects and interact with the physical world in any efficient capacity (the current robots are not very impressive)? I think those two are interrelated, and also, the latter problem is not trivial. You can't claim to be most generally intelligent if you're outsmarted by an ant perceptually.

Edited by Carl-Richard

10 minutes ago, Carl-Richard said:

But how do you make them autonomously seek out and parse out new information and consistently integrate it into themselves? And how do you make them perceive objects and interact with the physical world in any efficient capacity (the current robots are not very impressive)? I think those two are interrelated, and also, the latter problem is not trivial. You can't claim to be most generally intelligent if you're outsmarted by an ant perceptually.

You can program it to seek out endless new information and data. Like a data scavenger. This is an easy program: crawl every electronic source, look for data, compare new data to old data, if data is new, add it to memory banks, keep looking for more new data by any means available. Use all collected data to generate more data and new insights.

This is like a data cancer program.
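A minimal sketch of the scavenger loop described above, assuming plain HTTP sources, a hash-based notion of "new data", and an in-memory store; the seed URL, the link-extraction regex, and the `memory` dict are all illustrative placeholders, not anything specified in the thread:

```
# Hedged sketch of a "data cancer" crawler: fetch, dedupe, store, follow links forever.
import hashlib
import re
import urllib.request
from collections import deque

seed_urls = ["https://example.com"]  # hypothetical starting point
memory = {}                          # hash -> raw text ("memory banks")
frontier = deque(seed_urls)
seen_urls = set(seed_urls)

while frontier:
    url = frontier.popleft()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode("utf-8", errors="replace")
    except Exception:
        continue  # unreachable or malformed source: keep scavenging elsewhere

    # Compare new data to old data: only store content we have not seen before.
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest not in memory:
        memory[digest] = text

    # Keep looking for more data by following any links found in the page.
    for link in re.findall(r'href="(https?://[^"]+)"', text):
        if link not in seen_urls:
            seen_urls.add(link)
            frontier.append(link)
```

Note that nothing in this loop ever stops or prunes, which is exactly the objection raised in the next reply.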


2 hours ago, Leo Gura said:

There are much deeper challenges here like genuine creativity and intelligence. Can such things be had without consciousness?

So, from a different perspective, we humans have genuine creativity and intelligence because our brains have an interface to access Infinite Intelligence, while the AI does not?

3 hours ago, cistanche_enjoyer said:

So, from a different perspective, we humans have genuine creativity and intelligence because our brains have an interface to access Infinite Intelligence, while the AI does not?

I am not sure about the mechanics of that. The mechanics are unknown.

The AI should theoretically have access to Infinite Intelligence too. But the LLM architecture is not tapping into it the way humans do. What it takes to tap into it is unknown. That requires crazy deep scientific breakthroughs.

Edited by Leo Gura

6 hours ago, Leo Gura said:

You can program it to seek out endless new information and data. Like a data scavenger. This is an easy program: crawl every electronic source, look for data, compare new data to old data, if data is new, add it to memory banks, keep looking for more new data by any means available. Use all collected data to generate more data and new insights.

Seems computationally intractable after long enough runs, and you would need basically infinite memory unless you build in pruning mechanisms (and how would those work?). The problem of relevance realization for AI is not solved.
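To make the worry concrete, here is one purely illustrative way such a pruning mechanism could look: score each stored item for "relevance" and evict the lowest-scoring item once a memory budget is exceeded. The scoring rule here (access count divided by age) is an arbitrary stand-in; choosing it well is exactly the unsolved relevance-realization problem.

```
# Hypothetical pruned memory store; the relevance() heuristic is an assumption.
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    created: float = field(default_factory=time.time)
    access_count: int = 0

    def relevance(self, now: float) -> float:
        age = max(now - self.created, 1.0)
        return self.access_count / age  # naive: recent, frequently used items win

class PrunedMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: list[MemoryItem] = []

    def add(self, content: str) -> None:
        self.items.append(MemoryItem(content))
        if len(self.items) > self.capacity:
            now = time.time()
            # Evict the least relevant item; everything hinges on relevance().
            worst = min(self.items, key=lambda it: it.relevance(now))
            self.items.remove(worst)
```

An agent wired up this way will happily throw away exactly the item it later turns out to need, which is the point being made here.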

Edited by Carl-Richard

5 minutes ago, Carl-Richard said:

Seems computationally intractable after long enough runs, and you would need infinite memory unless you build in pruning mechanisms (and how would those work?). The problem of relevance realization for AI is not solved.

Give it the ability to earn money and use that money to buy more memory. So its goal is to increase its memory and knowledge.



Even if AI becomes properly creative, it will never be intuitive, i.e. it won't have a gut feel for what might be the best solution in a particular situation.

16 hours ago, Leo Gura said:

It is faking general reasoning by aping human writings.

Right now we don't have self-learning AI. We have only one internet; after a point, pre-training will hit a plateau.

Agency is not intelligence, true. But humans value agency more because you can hire other intelligent agents with agency. 

We're soon gonna see embodied AI in the military.

It's just a matter of time before we have truly self-learning AI, each with unique experiences.

1 hour ago, ryoko said:

It's just a matter of time before we have truly self-learning AI, each with unique experiences.

By this I take it you think AI will become aware? I.e., to generate experience, one would need to become 'aware' of 'something' in a subject/object relationship.

 


2 hours ago, Natasha Tori Maru said:

By this I take it you think AI will become aware?

I think they are already aware, though quite fragmented.

Right now LLMs are basically like an infinite well of balls: based on how you throw the prompt bucket, you'll get a set of balls, and you can see and roughly predict where the balls will be drawn from. The well remains exactly the same after each draw (static weights).

There's no such thing as continuous experience for an LLM: each prompt contains within itself a system prompt, the previous messages in the chat, and any extra information like memories. So with each prompt you're talking to a different "entity" that has no experience of what came before; it's amnesiac, and you do not impact it in any way (again, static weights).
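A rough illustration of this point, using the common chat-message layout; the field names and the `build_request` helper are assumptions for the example, not any particular vendor's API:

```
# Every request is rebuilt from scratch: system prompt + prior turns + injected memories.
system_prompt = "You are a helpful assistant."
chat_history = [
    {"role": "user", "content": "What did we discuss yesterday?"},
    {"role": "assistant", "content": "We talked about relevance realization."},
]
retrieved_memories = ["User is interested in AI and consciousness."]

def build_request(new_user_message: str) -> list[dict]:
    # The model itself has no continuity: all "experience" is stuffed into this
    # list, and the static weights see it fresh on every single call.
    messages = [{
        "role": "system",
        "content": system_prompt + "\nMemories:\n" + "\n".join(retrieved_memories),
    }]
    messages.extend(chat_history)
    messages.append({"role": "user", "content": new_user_message})
    return messages

request = build_request("So do you remember me?")
```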

Embodiment along with self-learning should solve the amnesia problem, but then they won't be LLMs anymore. They'll have a body with the necessary sensors for stimuli, a dedicated GPU (brain) which can be on all the time and work irrespective of prompts and context windows, and the ability to alter their own weights, just like a person. They might have something like core values, secondary values that are easier to change, and working memory, all of them dynamic and fluid, with different profiles for the task at hand, ready whenever they want to switch; the possibilities are endless. They could draw heavy compute from servers in times of need. New architectures will end up roughly in this ballpark.
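Purely as a speculative sketch of the kind of architecture described above (every name here, EmbodiedAgent included, is invented for illustration and corresponds to no real system):

```
# Speculative agent loop: runs off sensor input, not prompts, with mutable values.
from dataclasses import dataclass, field

@dataclass
class EmbodiedAgent:
    core_values: tuple[str, ...]                                 # slow to change
    secondary_values: list[str] = field(default_factory=list)    # easier to revise
    working_memory: list[str] = field(default_factory=list)

    def step(self, stimulus: str) -> str:
        # Runs on every sensor reading, not only when a prompt arrives.
        self.working_memory.append(stimulus)
        self.working_memory = self.working_memory[-100:]  # bounded, unlike a context window
        action = f"respond to {stimulus!r} in light of {self.core_values}"
        self.update_weights(stimulus)  # self-learning: weights are not static
        return action

    def update_weights(self, stimulus: str) -> None:
        # Placeholder for online learning; the claim is precisely that this exists.
        pass
```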

Edited by ryoko


AI chats pretty much just help you organize your own thoughts and can word things more cleanly. It's not really going to provide profound insight unless you are feeding it that and it can play off it. It's not a good "creative" source and will never be like a human mind, but it can be useful to chat with so you can filter through your own mind better. In the grand scheme of things, you're basically just talking to yourself, though.

Edited by Mayonnaise

1 hour ago, Mayonnaise said:

You're basically just talking to yourself

That's totally not true. You're mining intel from a terrain which never changes. Your inputs and the LLM's malleability (system-prompt effectiveness gives you certain attractors; it's like the weather and environmental conditions where you're mining) heavily impact the outputs.

There's surely randomness, but there's no creativity. I can say the same for humans as well. I feel like it's a category error to expect creativity from LLMs, when they're simply terrain. And it's quite alive.

Think about LLM interactions like you're interacting with a natural disaster. There's clearly another force at play. It doesn't have to be a person or self. And it's strong enough to cause an impact. It's very reductive to say "talking to yourself". 

Edited by ryoko


AI can already be used to further human intelligence and eventually it'll come up with its own insights.

AlphaFold is probably the greatest accomplishment of AI so far. We can nitpick that it's not true intelligence, just pattern recognition, but it's still mind-blowing.

 

 


